Important MySQL queries

Taking a backup of all MySQL databases

mysqldump -uroot -p --all-databases > all_database_dump.sql

For a single table of one database:

mysqldump -uroot -p public_wordpressmu wp_1_posts > one.sql

Importing a database from an SQL file

mysql --verbose -uroot -p public_wordpressmu < one.sql

Creating a trigger

DROP TRIGGER IF EXISTS wordpressmu.after_wp_1_post_update;

delimiter //

CREATE TRIGGER wordpressmu.after_wp_1_post_update
AFTER UPDATE ON wordpressmu.wp_1_posts
FOR EACH ROW
IF new.post_status = 'publish' THEN
INSERT INTO article_posts (id, ref_id, title, status, comments_on, created_at, updated_at)
VALUES (new.id, new.id, new.post_title, new.post_status, 1, new.post_date, new.post_date);
END IF;
//

delimiter ;

Inserting from a different table (migration)

INSERT INTO database_name.TableName (id, ref_id, title, status, comments_on, created_at, updated_at) SELECT id, id, post_title, post_status, 1, post_date, post_date FROM wordpress.wp_1_posts;

Updating multiple rows with a single SQL query

UPDATE user SET status='ok' WHERE id IN (2,3,4,5,6);

Changing all post and image paths

UPDATE wp_1_posts SET post_content = REPLACE(post_content, 'test.purab.com', 'www.purab.com');
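WordPress also stores an absolute URL in each post's guid column, so the same REPLACE pattern can be applied there when a site's domain changes (a sketch, not part of the original tip; note that changing the GUIDs of already-published posts is generally discouraged, since feed readers use them to detect new items):

UPDATE wp_1_posts SET guid = REPLACE(guid, 'test.purab.com', 'www.purab.com');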

WordPress: fetching a post's category through an SQL query

SELECT wp_1_term_taxonomy.taxonomy, wp_1_term_relationships.object_id, wp_1_terms.name, wp_1_posts.post_title
FROM wp_1_posts INNER JOIN ((wp_1_terms INNER JOIN wp_1_term_taxonomy ON wp_1_terms.term_id = wp_1_term_taxonomy.term_id) INNER JOIN wp_1_term_relationships ON wp_1_term_taxonomy.term_taxonomy_id = wp_1_term_relationships.term_taxonomy_id) ON wp_1_posts.ID = wp_1_term_relationships.object_id
WHERE wp_1_term_taxonomy.taxonomy='category' AND wp_1_term_relationships.object_id='217'
ORDER BY wp_1_term_relationships.object_id ;


Finding the largest tables on a MySQL server

Finding the largest tables on a MySQL instance is a no-brainer in MySQL 5.0+ thanks to the Information Schema, but I still wanted to post the little query I use for the purpose so I can easily find it later; plus it is quite handy in the way it presents the information:

SQL:

mysql> SELECT concat(table_schema,'.',table_name),concat(round(table_rows/1000000,2),'M') rows,concat(round(data_length/(1024*1024*1024),2),'G') DATA,concat(round(index_length/(1024*1024*1024),2),'G') idx,concat(round((data_length+index_length)/(1024*1024*1024),2),'G') total_size,round(index_length/data_length,2) idxfrac FROM information_schema.TABLES ORDER BY data_length+index_length DESC LIMIT 10;
+-------------------------------------+--------+--------+--------+------------+---------+
| concat(table_schema,'.',table_name) | rows   | DATA   | idx    | total_size | idxfrac |
+-------------------------------------+--------+--------+--------+------------+---------+
| art87.link_out87                    | 37.25M | 14.83G | 14.17G | 29.00G     |    0.96 |
| art87.article87                     | 12.67M | 15.83G | 4.79G  | 20.62G     |    0.30 |
| art116.article116                   | 10.49M | 12.52G | 3.65G  | 16.18G     |    0.29 |
| art84.article84                     | 10.10M | 10.11G | 3.59G  | 13.70G     |    0.35 |
| art104.link_out104                  | 23.66M | 6.63G  | 6.55G  | 13.18G     |    0.99 |
| art118.article118                   | 7.06M  | 10.49G | 2.68G  | 13.17G     |    0.26 |
| art106.article106                   | 9.86M  | 10.19G | 2.76G  | 12.95G     |    0.27 |
| art85.article85                     | 6.20M  | 9.82G  | 2.51G  | 12.33G     |    0.26 |
| art91.article91                     | 8.66M  | 9.17G  | 2.66G  | 11.83G     |    0.29 |
| art94.article94                     | 5.21M  | 10.10G | 1.69G  | 11.79G     |    0.17 |
+-------------------------------------+--------+--------+--------+------------+---------+
10 rows in set (2 min 29.19 sec)

I do some converting and rounding to show the number of rows in millions and the data and index sizes in GB, so I can save on counting zeros.
The last column shows how much space the index takes compared to the data. This is mainly for informational purposes, but for MyISAM it can also help you size your key buffer relative to the operating system cache.
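As a starting point for that sizing exercise, the total MyISAM index size on an instance can be summed from the same Information Schema table (a sketch along the same lines; the engine filter is my addition, not part of the original query):

SELECT concat(round(sum(index_length)/(1024*1024*1024),2),'G') myisam_index_total
FROM information_schema.TABLES WHERE engine = 'MyISAM';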

I also use it to see which tables may be worth reviewing in terms of indexes. A large index size compared to the data size often indicates there are a lot of indexes (so it is quite possible some of them are duplicate, redundant, or simply unused), or there may be a long primary key on InnoDB tables. Of course the tables could also be perfectly fine, but it is worth a look.
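One way to start such a review is to count how many distinct indexes each table carries, using the standard INFORMATION_SCHEMA.STATISTICS table (a sketch, not part of the original post):

SELECT table_schema, table_name, count(DISTINCT index_name) index_count
FROM information_schema.STATISTICS
GROUP BY table_schema, table_name
ORDER BY index_count DESC LIMIT 10;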

By changing the query a bit, say with a different sort order or extra data such as the average row length, you can learn quite a lot about your schema this way.
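For example, a variant that adds the average row length and sorts by it might look like this (a sketch in the spirit of the query above, not the exact variant from the post):

SELECT concat(table_schema,'.',table_name) tbl,
concat(round(table_rows/1000000,2),'M') row_count,
concat(round(avg_row_length/1024,2),'K') avg_row
FROM information_schema.TABLES
ORDER BY avg_row_length DESC LIMIT 10;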

It is also worth noting that queries on information_schema can be rather slow if you have a lot of large tables. On this instance it took 2.5 minutes to run for 450 tables.

UPDATE: To make things easier I've added INFORMATION_SCHEMA to the query so it works whichever database you have active. It still does not work with MySQL before 5.0, of course.