
Monday, August 30, 2010

MySQL: Architecture

MySQL operates in a networked environment using a client/server architecture. In other words, a central program acts as a server, and various client programs connect to the server to make requests. A MySQL installation has the following major components:

* MySQL Server, or mysqld, is the database server program. The server manages access to the actual databases on disk and in memory. MySQL Server is multi-threaded and supports many simultaneous client connections. Clients can connect via several connection protocols. For managing database contents, MySQL Server features a modular architecture that supports multiple storage engines that handle different types of tables (for example, it provides both transactional and non-transactional tables).

mysqld comes in several configurations. MySQL Max distributions contain a server named mysqld-max that includes features that are not built into the non-Max version, such as support for additional storage engines. On Windows, the mysqld-nt and mysqld-max-nt servers provide support for named-pipe connections on Windows NT, 2000, XP, and 2003. If a given installation includes multiple server programs, you pick one to run from among those available.

The exact feature configuration of MySQL Server may change over time, so whenever you download a new version, it’s wise to check the documentation.

As you read this article, please keep in mind the difference between a server and a host. The server is software (the MySQL server program mysqld). Server characteristics include its version number, whether certain features are included or excluded, and so forth. The host is the physical machine on which the server program runs. Host characteristics include its hardware configuration, the operating system running on the machine, its network addresses, and so forth.

* Client programs. These are programs that you use for communicating with the server to manipulate the information in the databases that the server manages. MySQL AB provides several client programs. The following list describes a few of them:
o MySQL Query Browser and MySQL Administrator are graphical front ends to the server.
o mysql is a command-line program that acts as a text-based front end for the server. It’s used for issuing queries and viewing the results interactively from a terminal window.
o Other command-line clients include mysqlimport for importing data files, mysqldump for making backups, mysqladmin for server administration, and mysqlcheck for checking the integrity of the database files.
* MySQL non-client utilities. These are programs that act independently of the server. They do not operate by first establishing a connection to the server. myisamchk is an example. It performs table checking and repair operations. Another program in this category is myisampack, which creates compressed read-only versions of MyISAM tables. Both utilities operate by accessing MyISAM table files directly, independent of the mysqld database server.

MySQL runs on many varieties of Windows, UNIX, and Linux, but client/server communication is not limited to environments where all computers run the same operating system. Client programs can connect to a server running on the same host or a different host, and the client and server host need not have the same operating system. For example, client programs can be used on Windows to connect to a server that is running on Linux.

Friday, August 27, 2010

MYSQL:CREATING START & STOP SERVICE ON WINDOWS (XP)


Installing a service:
"D:\MySQL3306test\bin\mysqld" --install MYSQL3306 --defaults-file="D:\MySQL3306test\my.ini"

Removing a service:
"D:\MySQL3306test\bin\mysqld" --remove MYSQL3306

Cheers!!!!!

MYSQL:Display all Rows COMMA SEPARATED



To display all the rows of a column as a single comma-separated value per group, use GROUP_CONCAT(). Below is an example for your reference.

mysql> select * from hi_1;
+------+--------------------+
| i    | j                  |
+------+--------------------+
|    2 | NULL               |
|    2 | dh                 |
|    1 | edfsdh             |
|    1 | edfdsfddfsdh       |
|    1 | edfdsfddfioioiisdh |
+------+--------------------+
5 rows in set (0.00 sec)

mysql> select hi_1.i,group_concat(ifnull(hi_1.j,'NA') SEPARATOR ',') from hi_1 group by i;
+------+-------------------------------------------------+
| i    | group_concat(ifnull(hi_1.j,'NA') SEPARATOR ',') |
+------+-------------------------------------------------+
|    1 | edfsdh,edfdsfddfsdh,edfdsfddfioioiisdh          |
|    2 | NA,dh                                           |
+------+-------------------------------------------------+
2 rows in set (0.00 sec)

mysql>
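
Note that GROUP_CONCAT() output is truncated at group_concat_max_len (1024 bytes by default). If your concatenated values are longer, raise the limit for the session before running the query, for example:

mysql> SET SESSION group_concat_max_len = 100000;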





Cheers!!!

Thursday, August 26, 2010

MYSQL:Installation Steps




1.Add a login user and group for mysqld to run as:
root@stageserver:~# groupadd mysql
root@stageserver:~# useradd -g mysql mysql

2.Choose the directory under which you want to install MySQL, and change into that location:
root@stageserver:~# cd /opt

3.Unzip the package which creates the installation directory. Then create a symbolic link to that directory:
Unzip:
gunzip < /home/stageserver/mysql-advanced-gpl-5.1.45-linux-i686-glibc23.tar.gz | tar xvf -
Create the symbolic link:
ln -s /opt/mysql-advanced-gpl-5.1.45-linux-i686-glibc23 /usr/local/mysql
Change into the installation directory (i.e. /usr/local/mysql):
cd /usr/local/mysql



4.Add the full path name of this directory to your PATH environment variable so that your shell finds the MySQL programs properly:
root@stageserver:/usr/local/mysql# PATH=$PATH:/usr/local/mysql/bin
Include the same in your bash profile file (.profile):
PATH=$PATH:/usr/local/mysql/bin


5.Change ownership of the directory to the "mysql" user and group:
root@stageserver:/usr/local/mysql# chown -R mysql .
root@stageserver:/usr/local/mysql# chgrp -R mysql .


6.If you have not installed MySQL before, you must create the MySQL data directory and initialize the grant tables:
root@stageserver:/usr/local/mysql# scripts/mysql_install_db --user=mysql


7.Reboot the host:
root@stageserver:~# reboot


8.Run the following commands as root in the installation directory to change the ownership:
root@stageserver:/usr/local/mysql# chown -R root .
root@stageserver:/usr/local/mysql# chown -R mysql data


9.To start MySQL automatically at system startup, edit /etc/rc.local and add the following line just before the final "exit 0" line:
/bin/sh -c 'cd /usr/local/mysql; ./bin/mysqld_safe --user=mysql &'


10.Start the server from the /usr/local/mysql directory:
root@stageserver:/usr/local/mysql# bin/mysqld_safe --user=mysql &


11.Secure the installation by running the below command:
root@stageserver:/usr/local/mysql# bin/mysql_secure_installation


12.Check the MySQL version by using the following command:
root@stageserver:/usr/local/mysql# bin/mysqladmin version -uroot -p
Enter password: (enter your password here)
The output should show the version you installed, e.g. Server version 5.1.45-enterprise-gpl-advanced.


13.Verify that you can shut down the server:
root@stageserver:/usr/local/mysql# bin/mysqladmin -u root -p shutdown


14.Verify that you can start the server again. Do this by using mysqld_safe or by invoking mysqld directly. For example:
root@stageserver:/usr/local/mysql# bin/mysqld_safe --user=mysql --log &


15.Reboot the system and verify that MySQL starts automatically by checking the version again as in step 12.
16.Place your my.cnf in the /usr/local/mysql directory.
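
For step 16, a minimal my.cnf for this layout might look like the following sketch (the paths match the ones used in the steps above; adjust for your install):

[mysqld]
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
socket  = /tmp/mysql.sock
port    = 3306
user    = mysql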






Cheers!!!!!!

Wednesday, August 25, 2010

MySQL:Databases DUMP with only limit records

MYSQL:TAKING A MYSQL DATABASE BACKUP WITH ONLY A LIMITED NUMBER OF RECORDS -




./mysqldump -uXXXX -pXXXXX --host=hostIP -P3306 --opt --where="1 limit 10000" --all-databases > /home/anilalpati/Desktop/dbdump.sql
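
The same --where trick can be scoped to a single database or table if you don't need everything; a sketch (dbname and tablename are placeholders):

./mysqldump -uXXXX -pXXXXX --host=hostIP -P3306 --opt --where="1 limit 10000" dbname tablename > /home/anilalpati/Desktop/tabledump.sql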




Cheers!!!

Changing the MySQL data directory to protect against DB corruption



MySQL data is located under /var/lib/mysql by default, which can cause problems, and even corruption, for a huge database if that partition fills up.


Here are the steps to change the data directory.


Step 1:
Stop the mysql daemon:
/etc/init.d/mysql stop


Step 2:
Open the my.cnf file:
# vi /etc/mysql/my.cnf
Change datadir to the new directory path:
datadir=/usr/local/mysql/    # old value: /var/lib/mysql/
Step 3:
Create the /usr/local/mysql directory and give the required permissions to the mysql user:
mkdir /usr/local/mysql
chown -R mysql /usr/local/mysql
chgrp -R mysql /usr/local/mysql
Copy the data, preserving permissions, etc.:
cp -Rp /var/lib/mysql/* /usr/local/mysql
Step 4:
Start the mysql service
/etc/init.d/mysql start
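
To confirm the server picked up the new location, you can check the datadir variable after the restart, for example:

mysql -u root -p -e "SHOW VARIABLES LIKE 'datadir';"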




Cheers!!!!!

MySQL:Replication Configuration

MySQL Replication Configuration


Configuring the Master Server
1.Edit the [mysqld] section of the /etc/my.cnf file on the master server to include a log-bin option. Also include a server-id=master_id option in this section, where master_id must be a positive integer value from 1 to 2^32 - 1. For example:
[mysqld]
log-bin=/usr/local/mysql/data/mysql-logbin-filename
binlog-do-db=dbname

server-id=1


NOTE: The server-id value configured in this step must be unique to each server in the replication configuration. No two slave servers or the master server may have the same server-id value




2.Restart the MySQL server on the master server, and make sure no errors occurred by entering the following command:
cd /usr/local/mysql/bin
./mysqladmin -uusername -ppassword shutdown
./mysqld_safe --user=mysql &


3.Start the MySQL command-line client and acquire a global read lock by entering the following commands:
cd /usr/local/mysql/bin
./mysql -uusername -ppassword
flush tables with read lock;


4.Run the following command in the MySQL client to verify that the master server is running correctly, and to gather the necessary information for configuring the slave servers:
mysql> show master status;


5.Create a replication user on the master server with the privileges needed for replication. The slave servers use this account to connect to the master and set up the master/slave relationship.
mysql> create user 'anil'@'slave.test' identified by 'anil123';


6.Next, grant the necessary privileges to allow the MySQL slave servers to connect to the master server and establish replication:
mysql>grant replication slave on *.* to 'anil'@'192.168.0.160' identified by 'anil123';


7.While the database has the READ LOCK applied, export the data using mysqldump, including master data information:
mysqldump --all-databases --master-data > dbdump.db  (or take a physical backup / raw data backup)
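
This dump is loaded on each slave before replication is started; one way to do that (a sketch, assuming the dbdump.db file from above has been copied to the slave):

mysql -uusername -ppassword < dbdump.db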


8.Back in the MySQL command-line client on the master, release the read lock by entering the following commands:
cd /usr/local/mysql/bin
./mysql -uusername -ppassword
unlock tables;


The master server is now ready for the slave servers’ connection.





Configuring the Slave Servers



1.Change the server-id parameter in the /etc/my.cnf file as follows:
[mysqld]
server-id=2

2.Verify that the MySQL server on the slave server can be restarted correctly by entering the following command:
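
Presumably the same restart sequence used on the master in step 2 applies here as well, for example:

cd /usr/local/mysql/bin
./mysqladmin -uusername -ppassword shutdown
./mysqld_safe --user=mysql &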


3.The following SQL commands are used to set the master server information on the slave server:


mysql> CHANGE MASTER TO
-> MASTER_HOST='master_host_name',
-> MASTER_USER='replication_user_name',
-> MASTER_PASSWORD='replication_password',
-> MASTER_LOG_FILE='recorded_log_file_name',
-> MASTER_LOG_POS=recorded_log_position;




4.For this example, enter the following commands:
mysql> CHANGE MASTER TO
-> MASTER_HOST='master.test',
-> MASTER_USER='osmusr',
-> MASTER_PASSWORD='osmpass',
-> MASTER_LOG_FILE='mysql-bin.000001',
-> MASTER_LOG_POS=1678;
5.Start the slave threads, which initiate the master/slave server connection, by running the following command on the slave server:
mysql> start slave;
Run the show slave status command to verify that the slave server is connected to the master server correctly:
mysql> show slave status\G
The following output is displayed:


*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: master
Master_User: osmusr
Master_Port: 3306





Synchronizing Data
After you configure the master server and the slave server, perform the following steps to synchronize the database data.
1.Create the test database, osmsdb, and the test table, customer, on the master server by referring to the SQL scripts in “Creating the Sample Database for MySQL”.
2.Verify that the osmsdb database and customer table are created automatically on the slave server. To do this, enter the following commands from the MySQL command-line running on the slave servers:
mysql> show databases;
Verify that osmsdb is listed in the output.
mysql> use osmsdb;
mysql> show tables;

Verify that customer is listed as a table in the output.
3.Run some SQL commands that modify the customer table on the master server, and verify that these changes are reflected in the table on the slave server. For example, insert a record by entering the following command on the master server:
mysql> insert into customer values (1, 'customer1', '2000-01-01 10:10:10', 'table', 1234);

4.Verify that the SQL commands are executed automatically on the slave server.










Monitoring Replication Status
1.Run the following command on the master server to display the status:
mysql> show master status\G
The file name and position of the bin log file that the master server is using are displayed:
*************************** 1. row ***************************
File: mysql-bin.000003
Position: 98
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.00 sec)
2.Run the following command on the master server to get information about slave servers that are connecting to the master server:
mysql> show processlist\G
The following data is displayed:


*************************** 1. row ***************************
Id: 32
User: osmusr
Host: slave-1.test:32798
db: NULL
Command: Binlog Dump
Time: 5740
State: Has sent all binlog to slave; waiting for binlog to be updated
Info: NULL
*************************** 2. row ***************************
Id: 33
User: osmusr
Host: slave-2.test:32944
db: NULL
Command: Binlog Dump
Time: 2150
State: Has sent all binlog to slave; waiting for binlog to be updated
Info: NULL
*************************** 3. row ***************************
Id: 34
User: root
Host: localhost
db: NULL
Command: Query
Time:
State: NULL
Info: show processlist
3 rows in set (0.00 sec)


The output shows that two slave servers, slave-1.test and slave-2.test, have connected to the master server, and the master server has sent all the binary log data to both slave servers and they are both up to date.
1.To show the status of the slave server, run the following command on the slave server:
mysql> show slave status\G
The master server information for this slave server and its status are displayed. If the slave server has processed all updates from the binary log and is up to date with the master server, the values of Master_Log_File and Read_Master_Log_Pos in the output are consistent with the output of show master status on the master server from step 1.
2.To show the current status of the slave server I/O thread, run the following command from the slave server:
mysql> show processlist\G


*************************** 1. row ***************************
Id: 4
User: system user
Host:
db: NULL
Command: Connect
Time: 173581
State: Waiting for master to send event
Info: NULL
*************************** 2. row ***************************
Id: 5
User: system user
Host:
db: NULL
Command: Connect
Time: 81990
State: Has read all relay log; waiting for the slave I/O thread to update it
Info: NULL
*************************** 3. row ***************************
Id: 225
User: root
Host: localhost
db: NULL
Command: Query
Time: 0
State: NULL
Info: show processlist
3 rows in set (0.00 sec)


The show processlist command on the slave server displays the current status of the slave I/O thread, which connects to the master server and writes events from the master server to relay logs. This command also shows the status of the slave server SQL thread, which reads events from these relay logs and enters them into the slave server database. The values of the State field in the output indicate that the slave server I/O thread and the slave server SQL thread have finished their tasks and are waiting for additional updates from the master server.

MySQL: Development & Features Quick Check!

MySQL Development History




- MySQL is pronounced as "My Ess Que Ell" OR "My SEQUEL"
- MySQL was first released internally on 23 May 1995
- Windows version was released on January 8, 1998 for Windows 95 and NT
- Version 3.23: beta from June 2000, production release January 2001
- Version 4.0: beta from August 2002, production release March 2003
- Version 4.1: beta from June 2004, production release October 2004
- Version 5.0: beta from March 2005, production release October 2005
- Version 5.1: beta from November 2005, production release November 2008
- Sun Microsystems acquires MySQL AB on 26 February 2008




MySQL Features History




- Version 3.23.23 Full-Text Search
- Version 4.0 Full-Text Search (IN BOOLEAN MODE), UNIONS
- Version 4.1 R-Tree and B-Tree, Sub-Queries, Prepared Statements
- Version 5.0 Cursors, Stored Procedures, Triggers, Views, XA Transactions
- Version 5.1 Event Scheduler, Partitioning, Plugin API, Row-Based Replication, Server Log Tables




Cheers!!!!!

MYSQL: Enable BINARY Logging using my.cnf config


Below are the parameters to enable binary logging. Make sure the values are chosen based on your hardware configuration and RAM.

To limit binary logging to specific databases, use the "binlog_do_db" parameter.


The server automatically rotates the binary log to a new file based on the configuration settings, and you can also flush the logs yourself with a shell script automated via a cron job.

If you want to cap the file size, MySQL also has a built-in option for that: the "max_binlog_size" parameter.



# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
#server-id = 1
log_bin = /usr/local/mysql/logfiles/XXXXXX_index.log
binlog_format=ROW
#log-slave-updates

#max_binlog_size = 100M
binlog_do_db = db1
binlog_do_db = db2
binlog_do_db = db3
binlog_do_db = db4
binlog_do_db = db5
#binlog_ignore_db = include_database_name
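
For housekeeping, the binary logs can also be rotated and trimmed from the mysql client; a sketch (the cutoff date is illustrative):

mysql> FLUSH LOGS;                                      -- close the current binary log and start a new one
mysql> PURGE BINARY LOGS BEFORE '2010-08-01 00:00:00';  -- remove binary logs older than the given date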


Cheers!!!!
MYSQL: slow queries log

MySQL has built-in functionality that allows you to log SQL queries to a file. You can log all SQL queries, or only the slow-running ones. Having the query log enabled makes it easier to troubleshoot and debug SQL statements, and the slow query log in particular is used to find queries that take a long time to execute and are therefore candidates for optimization.

To enable you just need to add some lines to your my.cnf file, and restart. Add the following:

* To enable the slow query log only

log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1

After enabling the slow query log, mysqld writes to it every SQL statement that took more than long_query_time seconds to execute. The time to acquire the initial table locks is not counted as execution time. mysqld logs a statement only after it has been executed and after all locks have been released, so log order might differ from execution order. The minimum and default values of long_query_time are 1 and 10, respectively.

* To enable the full query log

log=/var/log/mysqldquery.log

The above will log all queries to the log file.

Selecting Queries to Optimize
The slow query log
– Logs all queries that take longer than long_query_time
– Can also log all queries that don’t use indexes with
--log-queries-not-using-indexes
– To log slow administrative commands use
--log-slow-admin-statements
– To analyze the contents of the slow log use mysqldumpslow
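
A quick sketch of summarizing the slow log with mysqldumpslow (sorted by time, top 10 queries, using the log path from above):

mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log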

MySQL:Information schema Tips

Query to list largest tables in the database
------------------------------------------------------------

mysql> SELECT concat(table_schema,'.',table_name) table_name,
       concat(round(data_length/(1024*1024),2),'M') data_length_mb
       FROM information_schema.TABLES
       ORDER BY data_length DESC;



Query to list all tables without primary key
---------------------------------------------------------

SELECT CONCAT(t.table_schema,'.',t.table_name) AS table_name
FROM information_schema.TABLES t
LEFT JOIN information_schema.TABLE_CONSTRAINTS tc
    ON t.table_schema = tc.table_schema
    AND t.table_name = tc.table_name
    AND tc.constraint_type = 'PRIMARY KEY'
WHERE tc.constraint_name IS NULL
AND t.table_type = 'BASE TABLE';

Query to display tables with more than 1000 rows
--------------------------------------------------------------------

mysql> select concat(table_schema,'.',table_name) as table_name, table_rows
       from information_schema.tables
       where table_rows > 1000
       order by table_rows desc;





INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS shows primary key / foreign key relationships
----------------------------------------------------------------------------
SELECT table_name, referenced_table_name, update_rule, delete_rule
FROM information_schema.referential_constraints
WHERE constraint_schema='databasename'
ORDER BY table_name;




INFORMATION_SCHEMA.TABLE_CONSTRAINTS reveals the constraints defined in tables.
----------------------------------------------------------------------------------
SELECT table_name, constraint_type, constraint_name
FROM information_schema.table_constraints
WHERE table_schema='databasename'
ORDER BY table_name, constraint_type;


INFORMATION_SCHEMA.COLUMN_PRIVILEGES shows column-level privileges.
------------------------------------------------------------------
SELECT table_name, column_name, grantee, privilege_type, is_grantable
FROM information_schema.column_privileges
WHERE table_schema='databasename'
ORDER BY table_name;




INFORMATION_SCHEMA.TABLE_PRIVILEGES shows table-level privileges.
------------------------------------------------------------------
SELECT table_name, grantee, privilege_type, is_grantable
FROM information_schema.table_privileges
WHERE table_schema='databasename'
ORDER BY table_name;


Cheers !!!

Backup and Restore of MySQL Databases

Here is the simple process I follow to get this task done the easy way -


If you have to re-build your database from scratch, you can easily restore the mysqldump file by using the mysql command. This method is usually used to recreate or rebuild the database from scratch.


Here's how you would restore your custback.sql file to the Customers database.
mysql -u XXXX -pXXXX Customers < custback.sql


Easy, isn't it? Here's the general format you would follow (note there is no space between -p and the password):
mysql -u [username] -p[password] [database_to_restore] < [backupfile]
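
For reference, a backup file like custback.sql would typically be created with mysqldump along these lines (plain and gzipped variants; a sketch, not the exact command used originally):

mysqldump -u sadmin -p Customers > custback.sql
mysqldump -u sadmin -p Customers | gzip > custback.sql.gz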


Now how about those zipped files? You can restore your zipped backup files by first uncompressing its contents and then sending it to mysql.
gunzip < custback.sql.gz | mysql -u sadmin -ppass21 Customers


You can also combine two or more backup files to restore at the same time, using the cat command. Here's how you can do that.
cat backup1.sql backup.sql | mysql -u sadmin -ppass21


Moving Data Directly Between Databases
How would you like to replicate your present database to a new location? When you are shifting web hosts or database servers, you can directly copy data to the new database without having to create a database backup on your machine and restoring the same on the new server. mysql allows you to connect to a remote database server to run sql commands. Using this feature, we can pipe the output from mysqldump and ask mysql to connect to the remote database server to populate the new database. Let's say we want to recreate the Customers database on a new database server located at 202.32.12.32, we can run the following set of commands to replicate the present database at the new server.


mysqldump -u sadmin -ppass21 Customers | mysql --host=202.32.12.32 -C Customers




cheers!!!!

MySQL root password recovery on LINUX

MySQL root password recovery:
=============================

1. As Linux system root user stop the database process: /etc/init.d/mysql stop
(or: service mysql stop)
2. Start MySQL in safe mode and skip the use of the "grant tables": /usr/bin/mysqld_safe --user=mysql --socket=/var/lib/mysql/mysql.sock --pid-file=/var/run/mysqld/mysqld.pid --datadir=/var/lib/mysql --skip-grant-tables --skip-networking &
3. Reset the MySQL root password: mysqladmin -u root flush-privileges password newpassword
4. Stop MySQL running in safe mode: kill `cat /var/run/mysqld/mysqld.pid`
5. Start MySQL: /etc/init.d/mysql start
6. The new MySQL root password can now be used: mysql -u root -p
Respond with the password: newpassword
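
An alternative to step 3, if you prefer to reset the password from the mysql client while the server is still running with --skip-grant-tables (a sketch):

mysql -u root
mysql> UPDATE mysql.user SET Password=PASSWORD('newpassword') WHERE User='root';
mysql> FLUSH PRIVILEGES;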

MySQL : Allocating memory for caches

Allocating memory for caches in MySQL

If the server is dedicated to MySQL, any memory you don’t reserve for the operating system or for query processing is available for caches.
MySQL needs more memory for caches than anything else. It uses caches to avoid disk access, which is orders of magnitude slower than accessing data in memory. The operating system may cache some data on MySQL's behalf (especially for MyISAM), but MySQL needs lots of memory for itself too.


The following are the most important caches to consider for the majority of installations:
• The operating system caches for MyISAM data
• MyISAM key caches
• The InnoDB buffer pool
• The query cache
There are other caches, but they generally don’t use much memory.


It is much easier to tune a server if you’re using only one storage engine. If you’re using only MyISAM tables, you can disable InnoDB completely, and if you’re using only InnoDB, you need to allocate only minimal resources for MyISAM (MySQL uses MyISAM tables internally for some operations). But if you’re using a mixture of storage engines, it can be very hard to figure out the right balance between them.


The MyISAM Key Cache


The MyISAM key caches are also referred to as key buffers; there is one by default, but you can create more. Unlike InnoDB and some other storage engines, MyISAM itself caches only indexes, not data (it lets the operating system cache the data). If you use mostly MyISAM, you should allocate a lot of memory to the key caches.


The most important option is the key_buffer_size, which you should try setting to between 25% and 50% of the amount of memory you reserved for caches. The remainder will be available for the operating system caches, which the operating system will usually fill with data from MyISAM’s .MYD files.


MySQL 5.0 has a hard upper limit of 4 GB for this variable, no matter what architecture you’re running. (MySQL 5.1 allows larger sizes. Check the current documentation for your version of the server.)


By default MyISAM caches all indexes in the default key buffer, but you can create multiple named key buffers. This lets you keep more than 4 GB of indexes in memory at once. To create key buffers named key_buffer_1 and key_buffer_2, each sized at 1 GB, place the following in the configuration file:
key_buffer_1.key_buffer_size = 1G
key_buffer_2.key_buffer_size = 1G
Now there are three key buffers: the two explicitly created by those lines and the default buffer. You can use the CACHE INDEX command to map tables to caches. You can also tell MySQL to use key_buffer_1 for the indexes from tables t1 and t2 with the following SQL statement:
mysql> CACHE INDEX t1, t2 IN key_buffer_1;
Now when MySQL reads blocks from the indexes on these tables, it will cache the blocks in the specified buffer. You can also preload the tables’ indexes into the cache with the LOAD INDEX command:
mysql> LOAD INDEX INTO CACHE t1, t2;


You can place this SQL into a file that’s executed when MySQL starts up. The filename must be specified in the init_file option, and the file can include multiple SQL commands, each on a single line (no comments are allowed). Any indexes you don’t explicitly map to a key buffer will be assigned to the default buffer the first time MySQL needs to access the .MYI file.


You can monitor the performance and usage of the key buffers with information from SHOW STATUS and SHOW VARIABLES. You can calculate the hit ratio and the percentage of the buffer in use with these equations:
Cache hit ratio
100 - ( (Key_reads * 100) / Key_read_requests )
Percentage of buffer in use
100 - ( (Key_blocks_unused * key_cache_block_size) * 100 / key_buffer_size )
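
The inputs come from SHOW STATUS and SHOW VARIABLES, for example:

mysql> SHOW GLOBAL STATUS LIKE 'Key_read%';
mysql> SHOW VARIABLES LIKE 'key_%';

As a worked example (the figures are illustrative), if Key_read_requests is 500000 and Key_reads is 1000, the hit ratio is 100 - (1000 * 100 / 500000) = 99.8%.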


It’s good to know the cache hit rate, but this number can be misleading. For example, the difference between 99% and 99.9% looks small, but it really represents a tenfold increase. The cache hit rate is also application-dependent: some applications might work fine at 95%, whereas others might be I/O-bound at 99.9%. You might even be able to get a 99.99% hit rate with properly sized caches.


The number of cache misses per second is generally much more empirically useful. Suppose you have a single hard drive that can do 100 random reads per second. Five misses per second will not cause your workload to be I/O-bound, but 80 per second
will likely cause problems.
You can use the following equation to calculate the cache miss rate:
Key_reads / Uptime
Calculate the number of misses incrementally over intervals of 10 to 100 seconds, so you can get an idea of the current performance. The following command will show the incremental values every 10 seconds:
$ mysqladmin extended-status -r -i 10 | grep Key_reads
When you’re deciding how much memory to allocate to the key caches, it might help to know how much space your MyISAM indexes are actually using on disk. You don’t need to make the key buffers larger than the data they will cache.
Space your MyISAM indexes are actually using on disk
$ du -sch `find /path/to/mysql/data/directory/ -name "*.MYI"`


Remember that MyISAM uses the operating system cache for the data files, which are often larger than the indexes. Therefore, it often makes sense to leave more memory for the operating system cache than for the key caches. Finally, even if you don’t have any MyISAM tables, bear in mind that you still need to set key_buffer_size to a small amount of memory, such as 32M. The MySQL server sometimes uses MyISAM tables for internal purposes, such as temporary tables for GROUP BY queries.


The MyISAM key block size

The key block size is important (especially for write-intensive workloads) because of the way it causes MyISAM, the operating system cache, and the filesystem to interact. If the key block size is too small, you may encounter read-around writes, which are writes that the operating system cannot perform without first reading some data from the disk. Here’s how a read-around write happens, assuming the operating system’s page size is 4 KB (typically true on the x86 architecture) and the key block size is 1 KB:
1. MyISAM requests a 1 KB key block from disk.
2. The operating system reads 4 KB of data from the disk and caches it, then passes the desired 1 KB of data to MyISAM.
3. The operating system discards the cached data in favor of some other data.
4. MyISAM modifies the 1 KB key block and asks the operating system to write it back to disk.
5. The operating system reads the same 4 KB of data from the disk into the operating system cache, modifies the 1 KB that MyISAM changed, and writes the entire
4 KB back to disk.


The read-around write happened in step 5, when MyISAM asked the operating system to write only part of a 4 KB page. If MyISAM’s block size had matched the operating system’s, the disk read in step 5 could have been avoided. Unfortunately, in MySQL 5.0 and earlier, there’s no way to configure the key block size. However, in MySQL 5.1 and later, you can avoid read-around writes by making MyISAM’s key block size the same as the operating system’s. The myisam_block_size variable controls the key block size. You can also specify the size for each key with the KEY_BLOCK_SIZE option in a CREATE TABLE or CREATE INDEX statement, but because all keys are stored in the same file, you really need all of them to have blocks as large as or larger than the operating system’s to avoid alignment issues that could still cause read-around writes. (For example, if one key has 1 KB blocks and another has 4 KB blocks, the 4 KB block boundaries might not match the operating system’s page boundaries.)


The InnoDB Buffer Pool


If you use mostly InnoDB tables, the InnoDB buffer pool probably needs more memory than anything else. Unlike the MyISAM key cache, the InnoDB buffer pool doesn’t just cache indexes: it also holds row data, the adaptive hash index, the insert buffer, locks, and other internal structures. InnoDB also uses the buffer pool to help it delay writes, so it can merge many writes together and perform them sequentially. In short, InnoDB relies heavily on the buffer pool, and you should be sure to allocate enough memory to it.


The MySQL manual suggests using up to 80% of the machine’s physical memory for the buffer pool on a dedicated server; in reality, you can use more than that if the machine has a lot of memory.
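
For example, on a dedicated host with 16 GB of RAM, an allocation in this spirit might look like the following my.cnf sketch (the exact figure depends on your other caches and the operating system's needs):

[mysqld]
innodb_buffer_pool_size = 12G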


As with the MyISAM key buffers, you can use variables from SHOW commands or tools such as innotop to monitor your InnoDB buffer pool’s memory usage and performance. There’s no equivalent of LOAD INDEX INTO CACHE for InnoDB tables. However, if you’re trying to warm up a server and get it ready to handle a heavy load, you can issue queries that perform full table scans or full index scans.


In most cases, you should make the InnoDB buffer pool as large as your available memory allows. However, in rare circumstances, very large buffer pools (say, 50 GB) can cause long stalls. For example, a large buffer pool may become slow during checkpoints or insert buffer merge operations, and concurrency can drop as a result of locking. If you experience these problems, you may have to reduce the buffer pool size.


You can change the innodb_max_dirty_pages_pct variable to instruct InnoDB to keep more or fewer dirty (modified) pages in the buffer pool. If you allow a lot of dirty pages, InnoDB can take a long time to shut down, because it writes the dirty pages to the data files upon shutdown.
You can monitor the number of dirty pages by watching the Innodb_buffer_pool_pages_dirty server status variable or using innotop to monitor SHOW INNODB STATUS.
Lowering the value of the innodb_max_dirty_pages_pct variable doesn’t actually guarantee that InnoDB will keep fewer dirty pages in the buffer pool. Instead, it controls the threshold at which InnoDB stops being “lazy.” InnoDB’s default behavior is to flush dirty pages with a background thread, merging writes together and performing them sequentially for efficiency. This behavior is called “lazy” because it lets InnoDB delay flushing dirty pages in the buffer pool, unless it needs to use the space for some other data. When the percentage of dirty pages exceeds the threshold, InnoDB will flush pages as quickly as it can to try to keep the dirty page count lower. The variable’s default value is 90, so by default InnoDB will flush lazily until the buffer pool is 90% full of dirty pages. You can tweak the threshold for your workload if you wish to spread out the writes a bit more. For example, lowering it to 50 will generally cause InnoDB to do more write operations, because it will flush pages sooner and therefore be unable to batch the writes as well. However, if your workload has a lot of write spikes, using a lower value may help InnoDB absorb the spikes better: it will have more “spare” memory to hold dirty pages, so it won’t have to wait for other dirty pages to be flushed to disk.


The Thread Cache


The thread cache holds threads that aren’t currently associated with a connection but are ready to serve new connections. When there’s a thread in the cache and a new connection is created, MySQL removes the thread from the cache and gives it to the new connection. When the connection is closed, MySQL places the thread back into the cache, if there’s room. If there isn’t room, MySQL destroys the thread. As long as MySQL has a free thread in the cache, it can respond very rapidly to connection requests, because it doesn’t have to create a new thread for each connection.
The thread_cache_size variable specifies the number of threads MySQL can keep in the cache. You probably won’t need to tune this value, unless your server gets many connection requests. To check whether the thread cache is large enough, watch the Threads_created status variable. We generally try to keep the thread cache large enough that we see fewer than 10 new threads created each second, but it’s often pretty easy to get this number lower than 1 per second.


A good approach is to watch the Threads_connected variable and try to set thread_cache_size large enough to handle the typical fluctuation in your workload. For example, if Threads_connected usually stays between 100 and 200, you can set the cache size to 100. If it stays between 500 and 700, a thread cache of 200 should be large enough. Think of it this way: at 700 connections, there are probably no threads in the cache; at 500 connections, there are 200 cached threads ready to be used if the load increases to 700 again. Making the thread cache very large is probably not necessary for most uses, but keeping it small doesn’t save much memory, so there’s little benefit in doing so. Each thread that’s in the thread cache or sleeping typically uses around 256 KB of memory. This is very little compared to the amount of memory a thread can use when a connection is actively processing a query. In general, you should keep your thread cache large enough that Threads_created doesn’t increase very often. If this is a very large number, however (e.g., many thousand threads), you might want to set it lower because some operating systems don’t handle very large numbers of threads well, even when most of them are sleeping.
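
A quick way to apply that guidance (assuming a workload where Threads_connected hovers between 100 and 200, as in the example above):

mysql> SHOW GLOBAL STATUS LIKE 'Threads_created';
mysql> SHOW GLOBAL STATUS LIKE 'Threads_connected';

# in my.cnf, [mysqld] section:
thread_cache_size = 100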


The Table Cache


The table cache is similar in concept to the thread cache, but it stores objects that represent tables. Each object in the cache contains the associated table’s parsed .frm file, plus other data. Exactly what else is in the object depends on the table’s storage engine. For example, for MyISAM, it holds the table data and/or index file descriptors. For merge tables it may hold many file descriptors, because merge tables can have many underlying tables. The table cache can help you reuse resources. For instance, when a query requests access to a MyISAM table, MySQL might be able to give it a file descriptor from the cached object instead of opening the file. The table cache can also help avoid some of the I/O required for marking a MyISAM table as “in use” in the index headers. The table cache’s design is a little MyISAM-centric; this is one of the areas where the separation between the server and the storage engines is not completely clean, for historical reasons. The table cache is a little less important for InnoDB, because InnoDB doesn’t rely on it for as many purposes (such as holding file descriptors; it has its own version of a table cache for this purpose). However, even InnoDB benefits from caching the parsed .frm files.


In MySQL 5.1, the table cache is separated into two parts: a cache of open tables and a table definition cache (configured via the table_open_cache and table_definition_cache variables). Thus, the table definitions (the parsed .frm files) are separated from
the other resources, such as file descriptors. Opened tables are still per-thread, per-table-used, but the table definitions are global and can be shared among all connections efficiently. You can generally set table_definition_cache high enough to cache all your table definitions. Unless you have tens of thousands of tables, this is likely to be the easiest approach. If the Opened_tables status variable is large or increasing, the table cache isn’t large enough, and you should increase the table_cache system variable (or table_open_cache, in MySQL 5.1). The only real downside to making the table cache very large is that it might cause longer shutdown times when your server has a lot of MyISAM tables, because the key blocks have to be flushed and the tables have to be marked as no longer open. It can also make FLUSH TABLES WITH READ LOCK take a long time to complete, for the same reason.
If you get errors indicating that MySQL can’t open any more files (use the perror utility to check what the error number means), you might also need to increase the number of files MySQL is allowed to keep open. You can do this with the open_files_limit server variable in your my.cnf file.
The thread and table caches don’t really use much memory, and they are beneficial because they conserve resources. Although creating a new thread and opening a new file aren’t really expensive compared to other things MySQL might do, the overhead can add up quickly under a high-concurrency workload. Caching threads and tables can improve efficiency.

Cheers!!!

Security Issues with MySQL ROOT Access




MySQL offers simple but very effective security mechanisms. Unfortunately, the default installation of MySQL, and in particular the empty root password and the potential vulnerability to buffer overflow attacks, makes the database an easy target for attacks.
In order to achieve the highest possible level of security, the installation and configuration of MySQL should be performed in accordance with the following security requirements:


* MySQL processes must run under a unique UID/GID that is not used by any other system process.
* Only local access to MySQL should be allowed (with some exceptions for jobs/backups).
* The MySQL root account must be protected by a complex, hard-to-guess password.
* The administrator's account (root) needs to be renamed.
* Anonymous access to the database (by using the nobody account) must be disabled.


MySQL security risks can be categorized as follows.


* Filesystem security risks. The MySQL installation (basedir), the database files (datadir), and the log files (query log/slow log) contain information about the data and the queries that clients execute. These files and directories need to be protected so that other users who have login accounts on the server host cannot access them directly.
* Network security risks. The MySQL server provides access to databases by allowing clients to connect and make requests. Information about client accounts is stored in the mysql database. Each account should be set up with privileges that provide access only to the data the account needs to see or modify.


The MySQL root account has full privileges to perform any database operation, so it's important to assign it a password that is not easily guessed. Note that usernames and passwords for MySQL accounts are unrelated to those for system login accounts; the OS login and the MySQL login should not share the same password.


Restrict anonymous remote access


Grant access to specific users from specific hosts only. Do not grant access from all hosts.
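
For example, an application account scoped to one database and one subnet might look like this sketch (the database name, user name, host pattern, and password are illustrative):

mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'192.168.0.%' IDENTIFIED BY 'a_strong_password';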
Do not grant the PROCESS or SUPER privilege to non-administrative users. The output of mysqladmin processlist and SHOW PROCESSLIST shows the statements currently being executed, so any user who is allowed to see the server process list might be able to see statements issued by other users such as UPDATE user SET password=PASSWORD(pwd).


mysqld reserves an extra connection for users who have the SUPER privilege, so that a MySQL root user can log in and check server activity even if all normal connections are in use. This is very useful when MySQL reaches the max_connections threshold, but the benefit is lost if the root account (or any other account with the SUPER privilege) is used for all client connections.
Do not grant the FILE privilege to non-administrative users. Any user that has this privilege can write a file anywhere in the file system with the privileges of the mysqld daemon. To make this a bit safer, files generated with SELECT ... INTO OUTFILE do not overwrite existing files and are writable by everyone.
The FILE privilege may also be used to read any file that is accessible to the Unix user that the server runs as. With this privilege, a user can read any file into a database table. This could be abused, for example, by using LOAD DATA to load /etc/passwd into a table, which then can be displayed with SELECT.


Improve local security


Use a different socket file for client and server connections. The following parameter should be changed in the [client] section of /etc/my.cnf:
[client] socket = /tmp/mysql.sock

Change admin password



One of the most important steps in securing MySQL is changing the database administrator's password, which is empty by default. In order to change the administrator's password, follow the steps:
mysql -u root
mysql> SET PASSWORD FOR root@localhost=PASSWORD('new_password');
It is good practice not to change passwords from the command line (instead, change them at the database level). This is especially important when other users are working on the server; in that case the password could easily be revealed, e.g. by using the "ps aux" command or by reviewing history files, when improper access rights are set on them.

Change admin name



It is also recommended to change the default name of the administrator's account (root) to a different, harder-to-guess one. Such a change makes brute-force and dictionary attacks on the administrator's password more difficult.
mysql> update mysql.user set user='mysqluser' where user='root';
mysql> flush privileges;

Remove history



We should also remove the contents of the MySQL and shell history files (~/.mysql_history, ~/.history, ~/.bash_history), in which all executed SQL commands are stored (especially passwords, which are stored as plain text).
User Access Privileges
We can create accounts for specific databases which will be used by specific applications. These accounts should have access rights only to the databases used by those applications. In particular, they should not have any access rights to the mysql database, nor any system or administrative privileges (FILE, GRANT, ALTER, SHOW DATABASES, RELOAD, SHUTDOWN, PROCESS, SUPER, etc.). Application users should not be granted all privileges on a database with the GRANT OPTION, from any host.



mysql> select user,host,password from mysql.user;
GRANT USAGE ON *.* TO 'user1'@'%' IDENTIFIED BY PASSWORD 'xxxxxx'
GRANT ALL PRIVILEGES ON `user1`.* TO 'glist'@'%' WITH GRANT OPTION




cheers!!!!