Partitioning a MySQL Table Online with pt-online-schema-change

2021/9/6 19:07:32


The previous post changed partitioning with a plain ALTER TABLE. On a table with a lot of data this causes a long metadata lock, because the whole COPY TO TMP TABLE phase runs under lock. That wait is really unnecessary, but MySQL's online DDL does not yet support repartitioning a table in place.

Some services have no maintenance window for downtime, so the pt-online-schema-change tool from Percona Toolkit is needed instead.

https://www.percona.com/downloads/percona-toolkit/LATEST/

##01 Install on CentOS 7

tar -zxvf  .....
yum -y install perl-CPAN
yum -y  install perl-Time-HiRes


 

## The tool needs a few Perl dependencies; when it fails to run it usually names the missing module, so they can be installed up front with yum
yum install perl-DBI
yum install perl-DBD-MySQL
yum install perl-Time-HiRes
yum install perl-IO-Socket-SSL

##011 Build and install (the tool is written in Perl)

perl Makefile.PL
make
make install

## 02 SQL: use online DDL to change the table's primary key and unique indexes

## 0201 Drop the unique index on flow_no
ALTER TABLE `dba_test_big_table_source`.`dba_unique_test` DROP INDEX flow_no_idx;

## 0202 Re-create it as a plain (non-unique) index
CREATE INDEX `flow_no_idx` USING BTREE ON `dba_test_big_table_source`.`dba_unique_test` (`flow_no`);

## 0203 Change the primary key so it includes the partitioning column
ALTER TABLE `dba_test_big_table_source`.`dba_unique_test`  DROP PRIMARY KEY, ADD PRIMARY KEY(`id`,`create_time`) ;
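Steps 0201 to 0203 are needed because MySQL requires every unique key on a partitioned table, including the primary key, to contain all columns used in the partitioning expression. Before running pt-online-schema-change, it is worth confirming that no unique index is still missing `create_time`. A sketch against information_schema (schema and table names follow the example above):

```sql
-- List unique indexes that do NOT include the partitioning column create_time.
-- An empty result means the table is ready to be partitioned by create_time.
SELECT index_name
FROM information_schema.statistics
WHERE table_schema = 'dba_test_big_table_source'
  AND table_name   = 'dba_unique_test'
  AND non_unique   = 0
GROUP BY index_name
HAVING SUM(column_name = 'create_time') = 0;
```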


##03 vim pt_online.sh (note the double quotes around the date literals)

clear
pt-online-schema-change --print --statistics \
    --progress time,30 --preserve-triggers \
    --max-load=threads_running=100,threads_connected=200 \
    --critical-load=threads_running=1000  \
    --chunk-size=1000 --charset=utf8 --no-version-check \
    --alter-foreign-keys-method auto \
    --user=root  --password=Test^2021  h=192.168.0.227,P=3306,D=dba_test_big_table_source,t=dba_unique_test \
   --alter '
partition by range(to_days(`create_time`))
(
 PARTITION create_time_20210902 VALUES LESS THAN (to_days("2021-09-02")),
 PARTITION create_time_DEFAULTE VALUES LESS THAN MAXVALUE 
);'\
  --dry-run
 ##--execute

    
###031 MEMO
--print: print the SQL statements the tool executes.
--statistics: print statistics when finished.
--pause-file: pause execution while the specified file exists.
--max-load: pause when the given load is exceeded; defaults to Threads_running=25.
--critical-load: abort when the given load is exceeded.
--chunk-size: number of rows copied per chunk.
--alter-foreign-keys-method: how to update foreign keys that reference the table.
--progress: how often to print copy progress.
--charset=utf8: use utf8 to avoid mojibake with Chinese data.
--no-version-check: skip the version check; usually needed on Alibaba Cloud instances, which otherwise error out.
--dry-run: do not touch the original table; only create and alter the new table to validate the change.
--execute: actually perform the change.

## Options involving replicas
--max-lag: defaults to 1s. After each chunk is copied, the tool checks the lag of every replica using Seconds_Behind_Master. If any replica lags more than this value, copying pauses and the tool sleeps for the interval given by --check-interval before checking again. If a replica's replication is stopped, the tool waits indefinitely until it resumes and lag drops below the threshold. If --check-slave-lag is given, only that server's lag is checked rather than all replicas.
--check-slave-lag: a DSN for one replica; copying pauses whenever that replica's lag exceeds --max-lag.
--recursion-method: how replicas are discovered; defaults to processlist (SHOW PROCESSLIST). Can be hosts, which finds replicas via SHOW SLAVE HOSTS and requires report_host to be set on them, or none to skip replica checks entirely.
--check-interval: defaults to 1. How long to sleep between --max-lag checks.
--[no]check-replication-filters: defaults to yes. If the tool detects any replication-related filters in the server options, such as binlog_ignore_db or replicate_do_db, it reports an error and exits, because altering a table that exists on the source but not on a replica would break replication. Use --no-check-replication-filters to disable the check.
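When the instance does have replicas, the options above slot into the same pt-online-schema-change call used in pt_online.sh. A sketch (the replica DSN h=192.168.0.228 is hypothetical; the other values follow the script above):

```shell
pt-online-schema-change --print --statistics \
    --max-lag=5 --check-interval=2 \
    --check-slave-lag h=192.168.0.228,P=3306 \
    --recursion-method=processlist \
    --chunk-size=1000 --charset=utf8 --no-version-check \
    --user=root --password=Test^2021 \
    h=192.168.0.227,P=3306,D=dba_test_big_table_source,t=dba_unique_test \
    --alter '...' \
    --dry-run
# --alter takes the same PARTITION BY clause as in pt_online.sh above.
```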
 

##04 Script run output

[root@CENTOS7GUI ~]# sh pt_online.sh 
No slaves found.  See --recursion-method if host localhost.localdomain has slaves.
Not checking slave lag because no slaves were found and --check-slave-lag was not specified.
Operation, tries, wait:
  analyze_table, 10, 1
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
No foreign keys reference `dba_test_big_table_source`.`dba_unique_test`; ignoring --alter-foreign-keys-method.
Altering `dba_test_big_table_source`.`dba_unique_test`...
Creating new table...
CREATE TABLE `dba_test_big_table_source`.`_dba_unique_test_new` (
  `Id` bigint(20) NOT NULL AUTO_INCREMENT,
  `amount` decimal(20,2) NOT NULL,
  `status` smallint(3) NOT NULL,
  `flow_no` varchar(33) NOT NULL COMMENT '流水号',
  `create_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT '更新时间',
  PRIMARY KEY (`Id`,`create_time`),
  UNIQUE KEY `idx_id_time` (`Id`,`create_time`) USING BTREE,
  UNIQUE KEY `idx_flow_time` (`flow_no`,`create_time`) USING BTREE,
  KEY `idx_btree_flow` (`flow_no`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 COMMENT='用户流水表'
Created new table dba_test_big_table_source._dba_unique_test_new OK.

Altering new table...
ALTER TABLE `dba_test_big_table_source`.`_dba_unique_test_new` 
partition by range(to_days(`create_time`))
(
 PARTITION create_time_20210902 VALUES LESS THAN (to_days("2021-09-02")),
 PARTITION create_time_DEFAULTE VALUES LESS THAN MAXVALUE 
);
Altered `dba_test_big_table_source`.`_dba_unique_test_new` OK.


2021-09-06T15:47:19 Creating triggers...
-----------------------------------------------------------
Event : DELETE 
Name  : pt_osc_dba_test_big_table_source_merchant_account_history_del 
SQL   : CREATE TRIGGER `pt_osc_dba_test_big_table_source_dba_unique_test_del` AFTER DELETE ON `dba_test_big_table_source`.`dba_unique_test` FOR EACH ROW BEGIN DECLARE CONTINUE HANDLER FOR 1146 begin end; DELETE IGNORE FROM `dba_test_big_table_source`.`_dba_unique_test_new` WHERE `dba_test_big_table_source`.`_dba_unique_test_new`.`id` <=> OLD.`id` AND `dba_test_big_table_source`.`_dba_unique_test_new`.`create_time` <=> OLD.`create_time`; END  
Suffix: del 
Time  : AFTER 
-----------------------------------------------------------
-----------------------------------------------------------
Event : UPDATE 
Name  : pt_osc_dba_test_big_table_source_merchant_account_history_upd 
SQL   : CREATE TRIGGER `pt_osc_dba_test_big_table_source_dba_unique_test_upd` AFTER UPDATE ON `dba_test_big_table_source`.`dba_unique_test` FOR EACH ROW BEGIN DECLARE CONTINUE HANDLER FOR 1146 begin end; DELETE IGNORE FROM `dba_test_big_table_source`.`_dba_unique_test_new` WHERE !(OLD.`id` <=> NEW.`id` AND OLD.`create_time` <=> NEW.`create_time`) AND `dba_test_big_table_source`.`_dba_unique_test_new`.`id` <=> OLD.`id` AND `dba_test_big_table_source`.`_dba_unique_test_new`.`create_time` <=> OLD.`create_time`; REPLACE INTO `dba_test_big_table_source`.`_dba_unique_test_new` (`id`, `merchant_account_id`, `merchant_no`, `amount`, `type`, `flow_no`, `operator_id`, `create_time`, `last_update_time`, `before_amount`, `after_amount`, `remark`) VALUES (NEW.`id`, NEW.`merchant_account_id`, NEW.`merchant_no`, NEW.`amount`, NEW.`type`, NEW.`flow_no`, NEW.`operator_id`, NEW.`create_time`, NEW.`last_update_time`, NEW.`before_amount`, NEW.`after_amount`, NEW.`remark`); END  
Suffix: upd 
Time  : AFTER 
-----------------------------------------------------------
-----------------------------------------------------------
Event : INSERT 
Name  : pt_osc_dba_test_big_table_source_merchant_account_history_ins 
SQL   : CREATE TRIGGER `pt_osc_dba_test_big_table_source_dba_unique_test_ins` AFTER INSERT ON `dba_test_big_table_source`.`dba_unique_test` FOR EACH ROW BEGIN DECLARE CONTINUE HANDLER FOR 1146 begin end; REPLACE INTO `dba_test_big_table_source`.`_dba_unique_test_new` (`id`, `merchant_account_id`, `merchant_no`, `amount`, `type`, `flow_no`, `operator_id`, `create_time`, `last_update_time`, `before_amount`, `after_amount`, `remark`) VALUES (NEW.`id`, NEW.`merchant_account_id`, NEW.`merchant_no`, NEW.`amount`, NEW.`type`, NEW.`flow_no`, NEW.`operator_id`, NEW.`create_time`, NEW.`last_update_time`, NEW.`before_amount`, NEW.`after_amount`, NEW.`remark`);END  
Suffix: ins 
Time  : AFTER 
-----------------------------------------------------------
2021-09-06T15:47:19 Created triggers OK.


2021-09-06T15:47:19 Copying approximately 997472 rows...
INSERT LOW_PRIORITY IGNORE INTO `dba_test_big_table_source`.`_dba_unique_test_new` (`id`, `merchant_account_id`, `merchant_no`, `amount`, `type`, `flow_no`, `operator_id`, `create_time`, `last_update_time`, `before_amount`, `after_amount`, `remark`) SELECT `id`, `merchant_account_id`, `merchant_no`, `amount`, `type`, `flow_no`, `operator_id`, `create_time`, `last_update_time`, `before_amount`, `after_amount`, `remark` FROM `dba_test_big_table_source`.`dba_unique_test` FORCE INDEX(`PRIMARY`) WHERE ((`id` > ?) OR (`id` = ? AND `create_time` >= ?)) AND ((`id` < ?) OR (`id` = ? AND `create_time` <= ?)) LOCK IN SHARE MODE /*pt-online-schema-change 21612 copy nibble*/
SELECT /*!40001 SQL_NO_CACHE */ `id`, `id`, `create_time` FROM `dba_test_big_table_source`.`dba_unique_test` FORCE INDEX(`PRIMARY`) WHERE ((`id` > ?) OR (`id` = ? AND `create_time` >= ?)) ORDER BY `id`, `create_time` LIMIT ?, 2 /*next chunk boundary*/
Copying `dba_test_big_table_source`.`merchant_account_history`:  73% 00:10 remain
2021-09-06T15:48:00 Copied rows OK.

2021-09-06T15:48:00 Adding original triggers to new table.
2021-09-06T15:48:00 Analyzing new table...

2021-09-06T15:48:00 Swapping tables...
RENAME TABLE `dba_test_big_table_source`.`merchant_account_history` TO `dba_test_big_table_source`.`_merchant_account_history_old`, `dba_test_big_table_source`.`_merchant_account_history_new` TO `dba_test_big_table_source`.`merchant_account_history`
2021-09-06T15:48:00 Swapped original and new tables OK.

2021-09-06T15:48:00 Dropping old table...
DROP TABLE IF EXISTS `dba_test_big_table_source`.`_dba_unique_test_old`
2021-09-06T15:48:00 Dropped old table `dba_test_big_table_source`.`_dba_unique_test_old` OK.

2021-09-06T15:48:00 Dropping triggers...
DROP TRIGGER IF EXISTS `dba_test_big_table_source`.`pt_osc_dba_test_big_table_source_merchant_account_history_del`
DROP TRIGGER IF EXISTS `dba_test_big_table_source`.`pt_osc_dba_test_big_table_source_merchant_account_history_upd`
DROP TRIGGER IF EXISTS `dba_test_big_table_source`.`pt_osc_dba_test_big_table_source_merchant_account_history_ins`
2021-09-06T15:48:00 Dropped triggers OK.
# Event  Count
# ====== =====
# INSERT  1002
Successfully altered `dba_test_big_table_source`.`dba_unique_test`.
 


