Thursday, November 17, 2016

Percona Cluster Installation

Installation of this cluster was performed using standard Red Hat packages for simplicity.
If it does not already exist, you will need to create a “mysql” account in the Operating System to run MySQL:
adduser mysql
usermod -s /sbin/nologin mysql
Create empty directories to contain the files that will be necessary:

mkdir /u01/mysql_setup
mkdir /u01/mysql_setup/tmp
mkdir /u01/mysql_setup/logs
mkdir /u01/mysql_setup/data
Copy a template of the my.cnf file to the installation directory:
cp my.cnf /u01/mysql_setup
Set proper permissions on all the files and directories:

chown mysql:mysql /u01/mysql_setup/ -R
Install the Percona XtraDB Cluster Shared RPM:

rpm -i Percona-XtraDB-Cluster-shared-5.5.31-
If there is a problem with the above, you may need to install OpenSSL (this has not usually been necessary):
yum install openssl098e.x86_64
Install Perl-DBI:
yum install perl-DBI
Install the Percona XtraDB Cluster Client RPM:
rpm -i Percona-XtraDB-Cluster-client-5.5.31-
Install the Percona XtraDB Cluster Galera RPM:
rpm -i Percona-XtraDB-Cluster-galera-2.6-1.152.rhel5.x86_64.rpm
If it is not already installed, you will need the Perl DBD driver for MySQL:
yum --skip-broken install perl-DBD-mysql
Percona XtraBackup will be required for backups and SST/IST functions:
rpm -i percona-xtrabackup-2.1.4-656.rhel5.x86_64.rpm
If there is any problem installing the backup tool, you may need a Perl module:
yum install perl-Time-HiRes.x86_64
yum install --skip-broken perl-DBD-MySQL.x86_64
Install the Percona XtraDB Cluster Server RPM:
rpm -i --force Percona-XtraDB-Cluster-server-5.5.31-
If there is a problem with a missing socat dependency, grab the RPM from Dropbox and install it:
yum install compat-readline5.x86_64
rpm -i socat-
Do some cleanup:
mv /etc/my.cnf /etc/my.cnf.old
ln -s /u01/mysql_setup/my.cnf /etc/my.cnf
Create a new data directory containing the system database:
/usr/bin/mysql_install_db --datadir=/u01/mysql_setup/data
Edit the my.cnf file and set appropriate values for datadir, node IPs, node names, etc.:
nano /u01/mysql_setup/my.cnf
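As a sketch only, the cluster-related part of my.cnf typically looks like the fragment below. Every IP address, name, and password here is a placeholder, not a value from this install; replace them all with your own:

```
[mysqld]
datadir = /u01/mysql_setup/data
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2

# Galera settings -- all values below are placeholders
wsrep_provider = /usr/lib64/
wsrep_cluster_name = my_pxc_cluster
wsrep_cluster_address = gcomm://10.0.0.1,10.0.0.2,10.0.0.3
wsrep_node_name = node1
wsrep_node_address = 10.0.0.1
wsrep_sst_method = xtrabackup-v2
wsrep_sst_auth = sstuser:sstpassword
```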
If this is the first node you are starting, it must be bootstrapped:
/etc/init.d/mysql bootstrap-pxc
Set up a root password for the cluster:
/usr/bin/mysqladmin -u root password 'mysql123'
Once MySQL is up and running on the bootstrap node, execute the following on any node to complete the configuration of the cluster. Note the passwords below have been obfuscated.
If the first node is already running bootstrapped, you may start additional nodes (one at a time!) with the following:

/etc/rc.d/init.d/mysql start
If you get an SST error (broken pipe) when starting the additional nodes, you may have installed a newer version of xtrabackup; in that case you must set the SST method to xtrabackup-v2 in my.cnf or you will keep getting this error:
wsrep_sst_method = xtrabackup-v2
Install xinetd to allow TCP/IP connections to port 9200 to return cluster node status:
yum install xinetd
Edit the /etc/services file:
nano /etc/services
Comment out any line with port 9200 and add the following line:
mysqlchk 9200/tcp # MySQL check
Restart xinetd:
service xinetd restart
XINETD Considerations
The above should set up a file in /etc/xinetd.d that points to the /usr/bin/clustercheck bash script. The xinetd.d config file configures the check for requests coming to port 9200. Upon such a connection, xinetd calls the /usr/bin/clustercheck script, which queries the database using SHOW GLOBAL STATUS commands to determine whether the node is ready to receive traffic. The script returns an HTTP-formatted response header with code 200 on success and a message stating whether the node is synced or not.
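For reference, the xinetd entry usually looks like the sketch below; verify against the actual file your install created, since the user and flags shown here are assumptions:

```
# /etc/xinetd.d/mysqlchk -- sketch only
service mysqlchk
{
    disable        = no
    flags          = REUSE
    socket_type    = stream
    port           = 9200
    wait           = no
    user           = nobody
    server         = /usr/bin/clustercheck
    log_on_failure += USERID
    per_source     = UNLIMITED
}
```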
Unfortunately, there is a bug in the /usr/bin/clustercheck script that can cause some connections from remote hosts to fail. The fix is to add a “sleep 0.1” to the script in four places, as noted below:
# Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly
# Authors:
# Raghavendra Prabhu <>
# Olaf van Zandwijk <>
# Based on the original script from Unai Rodriguez and Olaf (
# Grant privileges required:
# GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!';
if [[ $1 == '-h' || $1 == '--help' ]]; then
    echo "Usage: $0 <user> <pass> <available_when_donor=0|1> <log_file>"
    exit
fi
# Timeout exists for instances where mysqld may be hung
# Perform the query to check the wsrep_local_state
WSREP_STATUS=`mysql -nNE --connect-timeout=$TIMEOUT --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} \
    -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | tail -1 2>>${ERR_FILE}`
if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
then
    # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
    # Shell return-code is 0
    echo -en "HTTP/1.1 200 OK\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    sleep 0.1
    echo -en "Content-Length: 40\r\n"
    echo -en "\r\n"
    sleep 0.1
    echo -en "Percona XtraDB Cluster Node is synced.\r\n"
    exit 0
else
    # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
    # Shell return-code is 1
    echo -en "HTTP/1.1 503 Service Unavailable\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    sleep 0.1
    echo -en "Content-Length: 44\r\n"
    echo -en "\r\n"
    sleep 0.1
    echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
    exit 1
fi
Cluster Install Validation
The cluster should now be up and operational. You can test whether each node is part of the cluster and synced by requesting its status over HTTP on port 9200. A node that is synced with the rest of the cluster returns the following response:
Percona XtraDB Cluster Node is synced.
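One way to script this check from the shell; the node names below are placeholders for your own hosts, and the small helper simply looks for the HTTP 200 status line described above:

```shell
# Placeholder node names -- replace with your cluster hosts.
NODES="node1 node2 node3"

# Return 0 when a clustercheck HTTP response indicates a synced node.
is_synced() {
    case "$1" in
        *"200 OK"*) return 0 ;;
        *)          return 1 ;;
    esac
}

for node in $NODES; do
    # -s silences progress output, -i keeps the HTTP status line.
    response=$(curl -si "http://$node:9200/" 2>/dev/null)
    if is_synced "$response"; then
        echo "$node: synced"
    else
        echo "$node: NOT synced (or unreachable)"
    fi
done
```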
Install sysstat so we get iostat, vmstat, and sar:
yum install sysstat
This completes the installation documentation for your Percona MySQL Cluster. 
Load Balancing The Nodes
Load Balancer Node Installation via Hardware
This is the typical solution. There is another team that is responsible for configuration of the Load Balancer. Typically, all we do is notify them of what we need. Below is the information we typically provide:
Sticky Session On
probe http probe_http_9200
port 9200
interval 10
passdetect interval 31
passdetect count 1
expect status 200
Be sure to test once this is complete. Ask for the VIP and take a node down one at a time while performing MySQL connections via command line to the VIP. To determine which node you hit perform the following within the MySQL CLI:
SELECT @@hostname;
Do this enough times to cycle through the hosts; you should not get a failed connection if the load balancer is configured properly.
The load balancer should check both port 3306 via TCP/IP and port 9200 via HTTP, where the response header should return code 200 on success. Please ensure that the LB does not check only port 3306 for a response: it is entirely possible for MySQL to be responsive while the node is not synced and therefore not ready to receive traffic from the LB.
Load Balancer Node Installation via HAProxy
Most of our installations utilize a hardware load balancer.  If we are ever requested to use a software load balancer, HAProxy is the tool we would choose.  Below are installation instructions for it.
Use yum to install HAProxy:
yum install haproxy
We need to enable HAProxy to be started by the init script:
nano /etc/default/haproxy
Set the ENABLED option to 1.
To check that this change was made properly, execute the HAProxy init script without any parameters. You should see the following:
service haproxy
Usage: /etc/init.d/haproxy {start|stop|reload|restart|status}
HAProxy is now installed, and can be configured. The configuration file is located at /etc/haproxy/haproxy.cfg. A complete copy of the current configuration file can be found at the end of this document. Once HAProxy is configured, it can be started via the standard init script:
/etc/init.d/haproxy start
At this point the HAProxy installation is complete, and all traffic routed on port 3306 to the HAProxy machine will be routed in a round-robin fashion to the Percona Cluster nodes. HAProxy can automatically detect if a node has a problem and remove it from the pool. It will also automatically add a node back into the pool if it comes back up clean and rejoins the cluster.
Be sure to test once this is complete.  Ask for the VIP and take a node down one at a time while performing MySQL connections via command line to the VIP.  To determine which node you hit perform the following within the MySQL CLI:
SELECT @@hostname;
Do this enough times to cycle through the hosts; you should not get a failed connection if the load balancer is configured properly.
General Cluster Information
There are generally a total of three (3) nodes in this cluster, along with a single load balancer. As with all Percona Cluster installations, three is the minimum number of nodes required for a cluster, because it allows a majority (quorum) to survive the loss of any single node. This does not mean that the cluster is down if one or more nodes are lost; it only means that at least three nodes are required for normal, fault-tolerant cluster operation.

Thursday, October 23, 2014

When not to use Thread pool in MariaDB?

Avoid the thread pool in the following situations:
A very bursty workload (long periods of inactivity mixed with short periods of high activity by many users), where you cannot tolerate the delays introduced by thread creation throttling. Even in this situation, performance can be improved by tweaking the way threads are retired, e.g. thread_pool_idle_timeout on Unix, or thread_pool_min_threads on Windows.

Many concurrent, long, non-yielding queries. Non-yielding here means that the thread never waits while executing a query, or does not indicate its waits to the thread pool. Such workloads are mostly seen in data warehouse scenarios. In this case long-running queries will delay the execution of other queries, and the delay is mitigated only by the stall detection/preemption mechanism (see the description of thread_pool_stall_limit). You can still connect through the extra-port TCP/IP port.

Reliance on simple queries always finishing quickly, no matter how loaded your database server is. With a loaded thread pool, queries might be queued for later execution, so even a simple SELECT 1 might take more time than with thread-per-connection.
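These knobs live in my.cnf; the sketch below uses illustrative values (not recommendations) to show where each parameter mentioned above fits:

```
[mysqld]
thread_handling = pool-of-threads
thread_pool_idle_timeout = 60     # Unix: seconds before an idle thread is retired
thread_pool_stall_limit  = 500    # ms before a stalled thread group is preempted
thread_pool_max_threads  = 500    # hard cap on threads in the pool (the default)
extra_port = 3307                 # thread-per-connection admin port
```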

PostgreSQL Monitoring Locks and Duplicate Indexes

SELECT bl.pid AS blocked_pid,
       a.usename AS blocked_user,
       ka.current_query AS blocking_statement,
       now() - ka.query_start AS blocking_duration,
       kl.pid AS blocking_pid,
       ka.usename AS blocking_user,
       a.current_query AS blocked_statement,
       now() - a.query_start AS blocked_duration
FROM pg_catalog.pg_locks bl
JOIN pg_catalog.pg_stat_activity a ON a.procpid = bl.pid
JOIN pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid AND kl.pid != bl.pid
JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid
WHERE NOT bl.granted;

Setting log_lock_waits and the related deadlock_timeout parameter helps analyze even intermittent locks that cause problems only occasionally. Slow lock acquisitions will appear in the database logs for later analysis.
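A sketch of the corresponding postgresql.conf settings; the timeout value is illustrative:

```
log_lock_waits   = on    # log a message when a session waits longer than deadlock_timeout
deadlock_timeout = 1s    # deadlock-check interval, also the log_lock_waits threshold
```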

PostgreSQL Duplicate Indexes

Duplicate indexes are a serious pain in a PostgreSQL infrastructure: they waste disk space and slow down writes, so it is worth dropping them once found. The script below helps you find duplicate indexes in PostgreSQL:

SELECT pg_size_pretty(sum(pg_relation_size(idx))::bigint) AS size,
       (array_agg(idx))[1] AS idx1, (array_agg(idx))[2] AS idx2,
       (array_agg(idx))[3] AS idx3, (array_agg(idx))[4] AS idx4
FROM (
    SELECT indexrelid::regclass AS idx, (indrelid::text ||E'\n'|| indclass::text ||E'\n'|| indkey::text ||E'\n'||
                                         coalesce(indexprs::text,'')||E'\n' || coalesce(indpred::text,'')) AS KEY
    FROM pg_index) sub
GROUP BY KEY HAVING count(*) > 1
ORDER BY sum(pg_relation_size(idx)) DESC;

MySQL - Lock wait timeout exceeded - try restarting transaction


Make sure the database tables are using the InnoDB storage engine and the READ-COMMITTED transaction isolation level.
If the above configuration is correct, try increasing the database server's innodb_lock_wait_timeout variable.
Restart the MySQL database service for the configuration to take effect.
If the steps above don't help, run the queries below to double-check the configuration:
show variables like '%wait_timeout%';
show variables like '%tx_isolation%';
SELECT @@GLOBAL.tx_isolation, @@SESSION.tx_isolation;
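A sketch of the corresponding my.cnf settings; the timeout value is illustrative, not a recommendation:

```
[mysqld]
# Seconds an InnoDB transaction waits for a row lock before raising the error (default 50)
innodb_lock_wait_timeout = 120
transaction-isolation    = READ-COMMITTED
```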

Troubleshooting blocking in MariaDB

Even with thread_pool_max_threads set high (recall the default for it is 500), if global locks are involved it is possible to block the entire pool. Imagine a situation where one client performs a FLUSH TABLES WITH READ LOCK and then pauses. If 500 other clients start a write operation, the maximum number of threads allowed in the pool is reached and then it is impossible to UNLOCK TABLES anymore. One way to solve the problem would be to set thread_pool_max_threads to a very high value, but this might be less than ideal, and negatively affect performance.

To work around such a situation, MariaDB permits you to use a dedicated admin connection. To allow one, set the extra_port server variable to the TCP port for this connection (it must be different from the usual port) and connect through that port as a superuser. Once connected, you can either increase thread_pool_max_threads or kill the offending connection; in the example above, that would be the connection holding the global lock (such a connection would be in the 'sleep' state).
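A sketch of how this looks in practice; the port number is an assumption:

```
[mysqld]
extra_port = 3307              # must differ from the regular port
extra_max_connections = 1      # reserve the admin port for a single connection

# Connect through the admin port as a superuser, e.g.:
#   mysql -u root -p --protocol=tcp --port=3307
# then either raise the cap or kill the offending connection:
#   SET GLOBAL thread_pool_max_threads = 2000;
#   KILL <connection_id>;
```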

PostgreSQL Performance Tuning params

max_connections =  This option sets the maximum number of database backends to have at any one time. Use this feature to ensure that you do not launch so many backends that you begin swapping to disk and kill the performance of all the children. Depending on your application it may be better to deny a connection entirely rather than degrade the performance of all of the other children.

shared_buffers =  Editing this option is the simplest way to improve the performance of your database server. The default is pretty low for most modern hardware. General wisdom says that this should be set to roughly 25% of available RAM on the system. Like most of the options outlined here, you will simply need to try them at different levels (both up and down) and see how well they work on your particular system. Most people find that setting it larger than a third of RAM starts to degrade performance.

effective_cache_size =  This value tells PostgreSQL's optimizer how much memory is available for caching data and helps it determine whether to use an index. A larger value increases the likelihood of an index being used. This should be set to the amount of memory allocated to shared_buffers plus the amount of OS cache available. Often this is more than 50% of the total system memory.

work_mem =  This option controls the amount of memory used in sort operations and hash tables. While you may need to increase it if you do a lot of sorting in your application, care needs to be taken. This isn't a system-wide parameter, but a per-operation one. So if a complex query has several sort operations in it, it will use multiple work_mem units of memory, and multiple backends could be doing this at once. This option can lead your database server to swap if the value is too large. It was previously called sort_mem in older versions of PostgreSQL.

max_fsm_pages = This option helps to control the free space map. When something is deleted from a table it isn't removed from the disk immediately; it is simply marked as "free" in the free space map. The space can then be reused by any new INSERTs on the table. If your setup has a high rate of DELETEs and INSERTs it may be necessary to increase this value to avoid table bloat.
fsync = This option determines whether all your WAL pages are fsync()'ed to disk before a transaction is committed. Having this on is safer, but can reduce write performance. If fsync is not enabled there is a chance of unrecoverable data corruption. Turn this off at your own risk.

commit_delay and commit_siblings = These options are used in concert to improve performance by writing out multiple transactions that are committing at once. If there are at least commit_siblings backends active at the instant your transaction is committing, the server waits commit_delay microseconds to try to commit multiple transactions at once.

random_page_cost = This controls how PostgreSQL costs non-sequential disk reads. A higher value makes the planner more likely to choose a sequential scan over an index scan; lowering it tells the planner your server has very fast disks, making index scans comparatively cheap.
Increase, in postgresql.conf, the value of shared_buffers.
Increase, in postgresql.conf, the value of effective_cache_size.
Run VACUUM on database tables frequently.
Run ANALYZE on the database periodically.
Don't run VACUUM FULL too often.
Move the WAL files to a separate physical disk.
Increase, in postgresql.conf, the value of sort_mem (work_mem in current versions).
Reduce, in postgresql.conf, the value of random_page_cost.
Increase, in postgresql.conf, the value of max_fsm_pages and max_fsm_relations
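Putting the advice together, a starting postgresql.conf for a hypothetical 8 GB server might look like this; every value is illustrative and must be tuned per workload:

```
shared_buffers       = 2GB     # ~25% of RAM
effective_cache_size = 6GB     # shared_buffers + expected OS cache
work_mem             = 16MB    # per sort/hash operation, not per connection
random_page_cost     = 2.0     # lower than the default 4.0 for fast disks
fsync                = on      # keep on unless you can tolerate data loss
```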


Wednesday, August 8, 2012

Simple aliases to remember MySQL commands and processes

The general form is alias name='command' (alias -p lists the aliases currently defined).

Alias examples ($ represents the command line prompt):
$ alias myalias='alias | grep mysql'
$ alias myenv='env | grep mysql '
$ alias mystat='mysqladmin -u root -p status'
$ alias myestat='mysqladmin -u root -p extended-status'
$ alias myping='mysqladmin -u root -p ping'
$ alias myrun='ps aux | grep mysql | grep -v grep'
$ alias mysql='mysql -u root -p'
$ alias mystop='mysqladmin -u root -p shutdown'
$ alias mysafe='nohup /../bin/mysqld_safe --defaults-file=./my.cnf & '

Could not parse relay log event entry.

If a slave shows the following error in SHOW SLAVE STATUS output:

Last_error: Could not parse relay log event entry. The possible reasons are: the master’s binary log is corrupted (you can check this by running ‘mysqlbinlog’ on the binary log), the slave’s relay log is corrupted (you can check this by running ‘mysqlbinlog’ on the relay log), a network problem, or a bug in the master’s or slave’s MySQL code. If you want to check the master’s binary log or slave’s relay log, you will be able to know their names by issuing ‘SHOW SLAVE STATUS’ on this slave.

Solution :

1) If the slave's relay log is corrupted, run CHANGE MASTER TO to get the slave back in sync with the master.

2) If the master's binlog is corrupted, we have to import a fresh dump from the master to get the slave back in sync.
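A sketch of the re-sync in case 1; the file name and position below are placeholders and must come from Relay_Master_Log_File / Exec_Master_Log_Pos in the slave's SHOW SLAVE STATUS:

```sql
STOP SLAVE;
-- placeholders: use the Relay_Master_Log_File / Exec_Master_Log_Pos values
CHANGE MASTER TO
  MASTER_LOG_FILE = 'mysql-bin.000123',
  MASTER_LOG_POS  = 4567;
START SLAVE;
```

This discards the corrupted relay logs and refetches the events from the master's binlog.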

Tuesday, July 5, 2011

MYSQL : Loading data into MySQL in quicker way

Disabling key checks can help speed up the import.
Whenever we load data into a MySQL database the following settings can help speed up the import.
innodb_flush_log_at_trx_commit = 0
innodb_support_xa = 0
If the server doesn't have replication configured and log-bin is enabled, disable it using SET SQL_LOG_BIN=0 or comment out the log-bin parameter in my.cnf. Similarly, disabling other logs will also help. These parameters reduce frequent syncs to disk.
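The session-level switches can be combined for the duration of the load; a sketch (re-enable everything after the import finishes):

```sql
SET SQL_LOG_BIN = 0;         -- skip binary logging for this session
SET unique_checks = 0;       -- part of "disabling key checks"
SET foreign_key_checks = 0;
-- ... run the import here ...
SET foreign_key_checks = 1;
SET unique_checks = 1;
SET SQL_LOG_BIN = 1;
```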

MYSQL : mysqld server - SOCKET ERROR


ERROR 2002 (HY000): Can’t connect to local MySQL server through socket '/usr/local/mysql/mysql.sock'

The following scenarios can lead to this socket error:

1. Check whether the mysqld server is running, using the command below:
ps -ef | grep mysql

If mysqld is running, check the socket path in the configuration file (i.e. my.cnf) and use the correct path when connecting.

2. Check the file permissions of the socket file. If the file is not owned by mysql, change its ownership using the command below:

chown mysql:mysql /usr/local/mysql/mysql.sock

3. If the socket file is missing, you have to restart your mysqld server; it will automatically create a new socket file.
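The checks above can be scripted. This is a sketch; the socket path is the one from this post and should be read from the [mysqld] section of your my.cnf instead:

```shell
# Classify a MySQL socket file per the three scenarios above.
check_socket() {
    sock="$1"
    if [ ! -S "$sock" ]; then
        echo "missing"      # scenario 3: restart mysqld to recreate it
    elif [ "$(stat -c %U "$sock" 2>/dev/null)" != "mysql" ]; then
        echo "bad-owner"    # scenario 2: chown mysql:mysql "$sock"
    else
        echo "ok"           # scenario 1: verify the path matches my.cnf
    fi
}

check_socket /usr/local/mysql/mysql.sock
```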


Monday, June 27, 2011

MYSQL : Listing summary of all Instance Schemas including disk size


#setting session variable for storing schemaname

SET @schemaname = IFNULL(@schemaname,DATABASE());

#one row schema usage 

SELECT   table_schema,
         SUM(data_length+index_length)/1024/1024 AS total_mb,
         SUM(data_length)/1024/1024 AS data_mb,
         SUM(index_length)/1024/1024 AS index_mb,
         COUNT(*) AS tables,
         CURDATE() AS today
FROM     information_schema.tables
WHERE    table_schema=@schemaname
GROUP BY table_schema;

#Display Engine/Collation Summary

SELECT   table_schema,engine,table_collation,
         COUNT(*) AS tables
FROM     information_schema.tables
WHERE    table_schema=@schemaname
GROUP BY table_schema,engine,table_collation;

#Display Schema Table Usage for each schema

SELECT @schemaname as table_schema, CURDATE() AS today;
SELECT   if(length(table_name)>20,concat(left(table_name,18),'..'),table_name) AS table_name,
         engine,row_format as format, table_rows, avg_row_length as avg_row,
         round((data_length+index_length)/1024/1024,2) as total_mb, 
         round((data_length)/1024/1024,2) as data_mb, 
         round((index_length)/1024/1024,2) as index_mb
FROM     information_schema.tables 
WHERE    table_schema=@schemaname

Friday, June 10, 2011

A one-liner to generate and run REPAIR TABLE for every MyISAM table outside the system schemas:
shell> mysql -N -e "select CONCAT('REPAIR TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ';') FROM information_schema.tables WHERE ENGINE='MyISAM' AND TABLE_SCHEMA!='mysql' AND TABLE_SCHEMA!='information_schema';" | mysql -uuser -ppassword

Monday, May 16, 2011

MySQL: Correct problems in corrupted MyISAM, ARCHIVE and CSV tables


The following steps recover corrupted tables for the engines mentioned above:

CHECK TABLE table1, table2;
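If CHECK TABLE reports corruption, the usual follow-up for these engines is REPAIR TABLE; the table names here are placeholders for whatever the check flagged:

```sql
REPAIR TABLE table1, table2;
```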