
Greenplum (GP) / Postgres: cleaning pg_xlog to free disk space

Date: 2019-07-01 08:44:34


When a Greenplum database processes large volumes of updates, it generates a large amount of pg_xlog (WAL), eating into disk space. After the system has run for a long time, the log files under pg_log also keep growing. The sound approach is to mount the log directory and the data directory on different disks, or to expand the disk. If they share one path, you can back up log files to another location to reclaim some space, or clean up pg_xlog.
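
As a quick illustration of the log-backup option, here is a minimal sketch that archives server logs older than seven days onto another mount point. The /data/backup target, the seven-day cutoff, and the *.csv pattern are assumptions for illustration, not details from this environment:

# Hypothetical example: move old pg_log files onto a separate backup mount.
# Paths, the *.csv pattern, and the 7-day retention are assumptions; adjust as needed.
cd /home/gpadmin/gpdata/gpmaster/gpseg-1/pg_log
tar -czf /data/backup/pg_log_$(date +%Y%m%d).tar.gz $(find . -maxdepth 1 -name "*.csv" -mtime +7) \
  && find . -maxdepth 1 -name "*.csv" -mtime +7 -delete   # delete originals only if the archive succeeded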

Note:

The files under pg_xlog must never be deleted by hand (for example with rm -rf); doing so can be catastrophic for the database and leave the service unable to start. Greenplum/PostgreSQL does ship a tool for clearing WAL, called pg_resetxlog, which works by resetting the xlog counters and discarding the xlog files. This tool is normally not recommended; only in special situations, such as a nearly full disk that cannot be expanded, is this kind of cleanup a reasonable choice, and even then there is some chance the service will fail to start afterwards. Never run pg_resetxlog while the service is running.
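
Since pg_resetxlog must only ever run against a stopped instance, a pre-flight check is worth the few seconds; a minimal sketch (the check itself is generic, and the path is the master directory used throughout this article):

DATADIR=/home/gpadmin/gpdata/gpmaster/gpseg-1
# a live instance keeps a postmaster.pid lock file in its data directory
if [ -f "$DATADIR/postmaster.pid" ]; then
    echo "postmaster.pid present - the instance may still be running, aborting" >&2
    exit 1
fi
# pg_controldata should report a cleanly shut down cluster before any reset
pg_controldata "$DATADIR" | grep "Database cluster state"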

Check the current pg_xlog usage [run on gpmaster]

[gpadmin@gpmaster ~]$ cd /home/gpadmin/gpdata/gpmaster/gpseg-1/
[gpadmin@gpmaster gpseg-1]$ du -sh *
99M     base
4.1M    global
20K     gpperfmon
4.0K    gpsegconfig_dump
4.0K    gpssh.conf
4.0K    internal.auto.conf
36K     pg_clog
36K     pg_distributedlog
4.0K    pg_dynshmem
8.0K    pg_hba.conf
4.0K    pg_ident.conf
28K     pg_log
12K     pg_logical
76K     pg_multixact
36K     pg_notify
4.0K    pg_replslot
4.0K    pg_serial
4.0K    pg_snapshots
4.0K    pg_stat
4.0K    pg_stat_tmp
36K     pg_subtrans
4.0K    pg_tblspc
4.0K    pg_twophase
4.0K    pg_utilitymodedtmredo
4.0K    PG_VERSION
193M    pg_xlog
4.0K    postgresql.auto.conf
24K     postgresql.conf
4.0K    postmaster.opts
4.0K    postmaster.pid

As shown above, pg_xlog in this data directory takes up 193M.

In fact, every data storage node in the cluster carries its own pg_xlog, each consuming space.

List all data storage paths with gpstate [run on gpmaster]

[gpadmin@gpmaster gpdata]$ gpstate -s | grep -E "Master data directory|Datadir" | awk -F"=" '{print $2}'
/home/gpadmin/gpdata/gpmaster/gpseg-1
/home/gpadmin/gpdata/gpdatap1/gpseg0
/home/gpadmin/gpdata/gpdatap2/gpseg1
/home/gpadmin/gpdata/gpdatap1/gpseg2
/home/gpadmin/gpdata/gpdatap2/gpseg3
/home/gpadmin/gpdata/gpdatap1/gpseg4
/home/gpadmin/gpdata/gpdatap2/gpseg5

The list obtained here is the full set of data storage directories across the cluster's nodes; the gpstandby path is the same as gpmaster's.
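
Rather than logging in to each host in turn, the per-node pg_xlog usage could be gathered in one pass with Greenplum's gpssh utility; a minimal sketch, assuming a host file named hostfile_all that lists every host (the file name and the directory glob are assumptions to be matched to your layout):

# run du on all hosts at once; adjust the glob to your directory layout
[gpadmin@gpmaster ~]$ gpssh -f hostfile_all -e "du -sh /home/gpadmin/gpdata/gpdatap*/gpseg*/pg_xlog"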

Stop the GP cluster [run on gpmaster]

[gpadmin@gpmaster gpdata]$ gpstop
0630:16:19:37:005122 gpstop:gpmaster:gpadmin-[INFO]:---------------------------------------------
0630:16:19:37:005122 gpstop:gpmaster:gpadmin-[INFO]:-Segment instances that will be shutdown:
0630:16:19:37:005122 gpstop:gpmaster:gpadmin-[INFO]:---------------------------------------------
0630:16:19:37:005122 gpstop:gpmaster:gpadmin-[INFO]:-   Host        Datadir                                Port   Status
0630:16:19:37:005122 gpstop:gpmaster:gpadmin-[INFO]:-   segment01   /home/gpadmin/gpdata/gpdatap1/gpseg0   6000   u
0630:16:19:37:005122 gpstop:gpmaster:gpadmin-[INFO]:-   segment01   /home/gpadmin/gpdata/gpdatap2/gpseg1   6001   u
0630:16:19:37:005122 gpstop:gpmaster:gpadmin-[INFO]:-   segment02   /home/gpadmin/gpdata/gpdatap1/gpseg2   6000   u
0630:16:19:37:005122 gpstop:gpmaster:gpadmin-[INFO]:-   segment02   /home/gpadmin/gpdata/gpdatap2/gpseg3   6001   u
0630:16:19:37:005122 gpstop:gpmaster:gpadmin-[INFO]:-   segment03   /home/gpadmin/gpdata/gpdatap1/gpseg4   6000   u
0630:16:19:37:005122 gpstop:gpmaster:gpadmin-[INFO]:-   segment03   /home/gpadmin/gpdata/gpdatap2/gpseg5   6001   u
Continue with Greenplum instance shutdown Yy|Nn (default=N):
> y
0630:16:19:39:005122 gpstop:gpmaster:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
0630:16:19:39:005122 gpstop:gpmaster:gpadmin-[INFO]:-Master segment instance directory=/home/gpadmin/gpdata/gpmaster/gpseg-1
0630:16:19:39:005122 gpstop:gpmaster:gpadmin-[INFO]:-Stopping master segment and waiting for user connections to finish ...
server shutting down
0630:16:19:40:005122 gpstop:gpmaster:gpadmin-[INFO]:-Cleaning up leftover gpsmon processes
0630:16:19:40:005122 gpstop:gpmaster:gpadmin-[INFO]:-No leftover gpsmon processes on some hosts. not attempting forceful termination on these hosts
0630:16:19:40:005122 gpstop:gpmaster:gpadmin-[INFO]:-Cleaning up leftover shared memory
[gpadmin@gpmaster gpdata]$

Cleanup procedure [run on gpmaster]

# All commands are run as the gpadmin administrative user; other ordinary Linux users cannot run them
# pg_controldata reads the metadata stored in pg_control for the given data directory
[gpadmin@gpmaster gpseg-1]$ pg_controldata /home/gpadmin/gpdata/gpmaster/gpseg-1 | grep -E "Latest checkpoint's NextXID|Latest checkpoint's NextOID"
Latest checkpoint's NextXID:          0/735
Latest checkpoint's NextOID:          16395
# -o takes the NextOID queried above, -x takes the NextXID; -f forces the reset, and the data directory is the final argument
[gpadmin@gpmaster gpseg-1]$ pg_resetxlog -o 16395 -x 735 -f /home/gpadmin/gpdata/gpmaster/gpseg-1
WARNING: Do not use this on Greenplum. pg_resetxlog might cause data loss
and render system irrecoverable. Do you wish to proceed? [yes/no] yes
Transaction log reset
[gpadmin@gpmaster gpseg-1]$

Verify the data directory again [run on gpmaster]

[gpadmin@gpmaster gpseg-1]$ du -sh *
99M     base
4.1M    global
20K     gpperfmon
4.0K    gpsegconfig_dump
4.0K    gpssh.conf
4.0K    internal.auto.conf
36K     pg_clog
36K     pg_distributedlog
4.0K    pg_dynshmem
8.0K    pg_hba.conf
4.0K    pg_ident.conf
36K     pg_log
12K     pg_logical
76K     pg_multixact
36K     pg_notify
4.0K    pg_replslot
4.0K    pg_serial
4.0K    pg_snapshots
52K     pg_stat
4.0K    pg_stat_tmp
36K     pg_subtrans
4.0K    pg_tblspc
4.0K    pg_twophase
4.0K    pg_utilitymodedtmredo
4.0K    PG_VERSION
65M     pg_xlog
4.0K    postgresql.auto.conf
24K     postgresql.conf
4.0K    postmaster.opts

pg_xlog in this data directory is now down to 65M; it has been cleaned.

Apply the same procedure node by node [run on each machine in the host list]

Note:

All data paths involved must come from the data directory list queried in your actual environment. Likewise, the NextXID and NextOID values must be the ones queried in your actual environment.

# [run on gpstandby]
[gpadmin@gpstandby gpdata]$ pg_controldata /home/gpadmin/gpdata/gpmaster/gpseg-1 | grep -E "Latest checkpoint's NextXID|Latest checkpoint's NextOID"
Latest checkpoint's NextXID:          0/735
Latest checkpoint's NextOID:          16395
[gpadmin@gpstandby gpdata]$ pg_resetxlog -o 16395 -x 735 -f /home/gpadmin/gpdata/gpmaster/gpseg-1
WARNING: Do not use this on Greenplum. pg_resetxlog might cause data loss
and render system irrecoverable. Do you wish to proceed? [yes/no] yes
Transaction log reset
[gpadmin@gpstandby gpdata]$

# [run on segment01]
[gpadmin@segment01 gpseg0]$ pg_controldata /home/gpadmin/gpdata/gpdatap1/gpseg0 | grep -E "Latest checkpoint's NextXID|Latest checkpoint's NextOID"
Latest checkpoint's NextXID:          0/726
Latest checkpoint's NextOID:          16392
[gpadmin@segment01 gpseg0]$ pg_controldata /home/gpadmin/gpdata/gpdatap2/gpseg1 | grep -E "Latest checkpoint's NextXID|Latest checkpoint's NextOID"
Latest checkpoint's NextXID:          0/726
Latest checkpoint's NextOID:          16392
[gpadmin@segment01 gpseg0]$ pg_resetxlog -o 16392 -x 726 -f /home/gpadmin/gpdata/gpdatap1/gpseg0
WARNING: Do not use this on Greenplum. pg_resetxlog might cause data loss
and render system irrecoverable. Do you wish to proceed? [yes/no] yes
Transaction log reset
[gpadmin@segment01 gpseg0]$ pg_resetxlog -o 16392 -x 726 -f /home/gpadmin/gpdata/gpdatap2/gpseg1
WARNING: Do not use this on Greenplum. pg_resetxlog might cause data loss
and render system irrecoverable. Do you wish to proceed? [yes/no] yes
Transaction log reset

# [run on segment02]
[gpadmin@segment02 gpdata]$ pg_controldata /home/gpadmin/gpdata/gpdatap1/gpseg2 | grep -E "Latest checkpoint's NextXID|Latest checkpoint's NextOID"
Latest checkpoint's NextXID:          0/726
Latest checkpoint's NextOID:          16392
[gpadmin@segment02 gpdata]$ pg_controldata /home/gpadmin/gpdata/gpdatap2/gpseg3 | grep -E "Latest checkpoint's NextXID|Latest checkpoint's NextOID"
Latest checkpoint's NextXID:          0/726
Latest checkpoint's NextOID:          16392
[gpadmin@segment02 gpdata]$ pg_resetxlog -o 16392 -x 726 -f /home/gpadmin/gpdata/gpdatap1/gpseg2
WARNING: Do not use this on Greenplum. pg_resetxlog might cause data loss
and render system irrecoverable. Do you wish to proceed? [yes/no] yes
Transaction log reset
[gpadmin@segment02 gpdata]$ pg_resetxlog -o 16392 -x 726 -f /home/gpadmin/gpdata/gpdatap2/gpseg3
WARNING: Do not use this on Greenplum. pg_resetxlog might cause data loss
and render system irrecoverable. Do you wish to proceed? [yes/no] yes
Transaction log reset

# [run on segment03]
[gpadmin@segment03 ~]$ pg_controldata /home/gpadmin/gpdata/gpdatap1/gpseg4 | grep -E "Latest checkpoint's NextXID|Latest checkpoint's NextOID"
Latest checkpoint's NextXID:          0/726
Latest checkpoint's NextOID:          16392
[gpadmin@segment03 ~]$ pg_controldata /home/gpadmin/gpdata/gpdatap2/gpseg5 | grep -E "Latest checkpoint's NextXID|Latest checkpoint's NextOID"
Latest checkpoint's NextXID:          0/726
Latest checkpoint's NextOID:          16392
[gpadmin@segment03 ~]$ pg_resetxlog -o 16392 -x 726 -f /home/gpadmin/gpdata/gpdatap1/gpseg4
WARNING: Do not use this on Greenplum. pg_resetxlog might cause data loss
and render system irrecoverable. Do you wish to proceed? [yes/no] yes
Transaction log reset
[gpadmin@segment03 ~]$ pg_resetxlog -o 16392 -x 726 -f /home/gpadmin/gpdata/gpdatap2/gpseg5
WARNING: Do not use this on Greenplum. pg_resetxlog might cause data loss
and render system irrecoverable. Do you wish to proceed? [yes/no] yes
Transaction log reset

At this point, pg_xlog has been cleaned in the data directories of every node in the cluster; a scripted variant of these per-directory steps is sketched below.
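
For reference, the read-then-reset dance (pg_controldata for NextXID/NextOID, then pg_resetxlog) could be scripted per host; a minimal, untested sketch, where the directory list is a placeholder to be replaced with your own and the Greenplum safety prompt still has to be answered interactively:

#!/bin/bash
# Hypothetical helper: reset the xlog of each local data directory.
# Run only as gpadmin and only with the cluster fully stopped.
# The directory list below is an example; substitute your own.
for dir in /home/gpadmin/gpdata/gpdatap1/gpseg0 /home/gpadmin/gpdata/gpdatap2/gpseg1; do
    # "Latest checkpoint's NextXID:  0/735" -> keep the part after the slash
    xid=$(pg_controldata "$dir" | grep "NextXID" | awk '{print $NF}' | cut -d/ -f2)
    # "Latest checkpoint's NextOID:  16395" -> keep the last field
    oid=$(pg_controldata "$dir" | grep "NextOID" | awk '{print $NF}')
    echo "resetting $dir with NextXID=$xid NextOID=$oid"
    pg_resetxlog -o "$oid" -x "$xid" -f "$dir"   # answer the [yes/no] prompt by hand
done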

Start the GP cluster [run on gpmaster]

# Start the GP cluster
[gpadmin@gpmaster gpdata]$ gpstart
0630:16:48:34:006792 gpstart:gpmaster:gpadmin-[INFO]:-Starting gpstart with args:
0630:16:48:34:006792 gpstart:gpmaster:gpadmin-[INFO]:-Gathering information and validating the environment...
0630:16:48:35:006792 gpstart:gpmaster:gpadmin-[INFO]:-   Host        Datadir                                Port
0630:16:48:35:006792 gpstart:gpmaster:gpadmin-[INFO]:-   segment01   /home/gpadmin/gpdata/gpdatap1/gpseg0   6000
0630:16:48:35:006792 gpstart:gpmaster:gpadmin-[INFO]:-   segment01   /home/gpadmin/gpdata/gpdatap2/gpseg1   6001
0630:16:48:35:006792 gpstart:gpmaster:gpadmin-[INFO]:-   segment02   /home/gpadmin/gpdata/gpdatap1/gpseg2   6000
0630:16:48:35:006792 gpstart:gpmaster:gpadmin-[INFO]:-   segment02   /home/gpadmin/gpdata/gpdatap2/gpseg3   6001
0630:16:48:35:006792 gpstart:gpmaster:gpadmin-[INFO]:-   segment03   /home/gpadmin/gpdata/gpdatap1/gpseg4   6000
0630:16:48:35:006792 gpstart:gpmaster:gpadmin-[INFO]:-   segment03   /home/gpadmin/gpdata/gpdatap2/gpseg5   6001
Continue with Greenplum instance startup Yy|Nn (default=N):
> y
0630:16:48:37:006792 gpstart:gpmaster:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
0630:16:48:37:006792 gpstart:gpmaster:gpadmin-[INFO]:-Checking if standby master is running on host: gpstandby in directory: /home/gpadmin/gpdata/gpmaster/gpseg-1
0630:16:48:38:006792 gpstart:gpmaster:gpadmin-[INFO]:-Database successfully started

# Check cluster status
[gpadmin@gpmaster gpseg-1]$ gpstate
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-Starting gpstate with args:
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.19.0 build commit:c88801925e5d51e4be57f54ff2158ed74a264b8e Open Source'
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.19.0 build commit:c88801925e5d51e4be57f54ff2158ed74a264b8e Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Dec 8 23:08:44'
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-Obtaining Segment details from master...
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-Gathering data from segments...
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-Greenplum instance status summary
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-----------------------------------------------------
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Master instance                                = Active
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Master standby                                 = gpstandby
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Standby master state                           = Standby host passive
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Total segment instance count from metadata     = 6
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-----------------------------------------------------
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Primary Segment Status
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-----------------------------------------------------
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Total primary segments                         = 6
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Total primary segment valid (at master)        = 6
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Total primary segment failures (at master)     = 0
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Total number of postmaster.pid files missing   = 0
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Total number of postmaster.pid files found     = 6
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing    = 0
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found      = 6
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Total number of /tmp lock files missing        = 0
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Total number of /tmp lock files found          = 6
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Total number postmaster processes missing      = 0
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Total number postmaster processes found        = 6
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-----------------------------------------------------
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Mirror Segment Status
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-----------------------------------------------------
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-   Mirrors not configured on this array
0630:17:20:19:007703 gpstate:gpmaster:gpadmin-[INFO]:-----------------------------------------------------

# Log in to the database
[gpadmin@gpmaster gpseg-1]$ psql
psql (9.4.26)
Type "help" for help.

testdb=# \d
                      List of relations
 Schema |         Name         | Type  |  Owner  | Storage
--------+----------------------+-------+---------+---------
 public | amr_n042_company_inv | table | gpadmin | heap
(1 row)

testdb=# select * from amr_n042_company_inv limit 1;
              rec_id              |               eid                |       inv        |               pid                | eid_inv | inv_type | blic_type | blic_no | country | currency | sub_conam |       acc_conam       |    sub_conam_usd     |    acc_conam_usd     |        con_prop        | con_form | con_date |     created     |   update_time   | delete_flg
----------------------------------+----------------------------------+------------------+----------------------------------+---------+----------+-----------+---------+---------+----------+-----------+-----------------------+----------------------+----------------------+------------------------+----------+----------+-----------------+-----------------+------------
 95e6834d0a3d99e9ea8811855ae9229d | f1a385984a2b3860624906202b84cfe1 | 测试数据测试数据 | 67f87ebe99ad9e6c21abeae9482ab52d |         | 20       |           |         | 156     | 156      | 50.000000 | 50.000000000000000000 | 7.370000000000000000 | 7.370000000000000000 | 100.000000000000000000 |          | -05-31   | -11-23 15:14:59 | -02-12 08:07:48 | 0
(1 row)

The cluster is running normally; the cleanup procedure is complete.

Errors when running a data import job

WARNING: database with OID 0 must be vacuumed within 147483647 transactions (seg1 192.168.0.243:6001 pid=7269)

testdb=# \timing
Timing is on.
testdb=# \i amr_n042_company_modify.sql
SET
Time: 5.479 ms
 set_config
------------

(1 row)

Time: 9.566 ms
SET
Time: 0.337 ms
SET
Time: 0.277 ms
SET
Time: 0.260 ms
SET
Time: 0.257 ms
SET
Time: 0.246 ms
psql:amr_n042_company_modify.sql:36: WARNING:  database with OID 0 must be vacuumed within 147483647 transactions
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that database.
psql:amr_n042_company_modify.sql:36: WARNING:  database with OID 0 must be vacuumed within 147483647 transactions  (seg2 192.168.0.98:6000 pid=7305)
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that database.
psql:amr_n042_company_modify.sql:36: WARNING:  database with OID 0 must be vacuumed within 147483647 transactions  (seg3 192.168.0.98:6001 pid=7306)
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that database.
psql:amr_n042_company_modify.sql:36: WARNING:  database with OID 0 must be vacuumed within 147483647 transactions  (seg0 192.168.0.243:6000 pid=7270)
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that database.
psql:amr_n042_company_modify.sql:36: WARNING:  database with OID 0 must be vacuumed within 147483647 transactions  (seg1 192.168.0.243:6001 pid=7269)
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that database.
psql:amr_n042_company_modify.sql:36: WARNING:  database with OID 0 must be vacuumed within 147483647 transactions  (seg5 192.168.0.156:6001 pid=12038)
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that database.
psql:amr_n042_company_modify.sql:36: WARNING:  database with OID 0 must be vacuumed within 147483647 transactions  (seg4 192.168.0.156:6000 pid=12037)
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that database.
CREATE TABLE
Time: 85.925 ms
psql:amr_n042_company_modify.sql:39: WARNING:  database with OID 0 must be vacuumed within 147483646 transactions
HINT:  To avoid a database shutdown, execute a database-wide VACUUM in that database.

After this pg_xlog cleanup, writing data into the database raised a flood of WARNING messages, presumably a side effect of resetting the transaction ID counter with pg_resetxlog:

WARNING: database with OID 0 must be vacuumed within 147483647 transactions (seg1 192.168.0.243:6001 pid=7269)

HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.

How to fix it

Note: run VACUUM FREEZE against the database names of your own real environment.

# Run the VACUUM FREEZE operation
[gpadmin@gpmaster ~]$ nohup psql -c " VACUUM FREEZE " testdb 2>&1 &
[gpadmin@gpmaster ~]$ nohup psql -c " VACUUM FREEZE " postgres 2>&1 &
# Wait patiently for VACUUM FREEZE to complete on every database you need to use
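
If the instance hosts more than a couple of databases, the names could be pulled from pg_database instead of being typed by hand; a minimal sketch:

# freeze every connectable database in turn (run as gpadmin on the master)
for db in $(psql -At -c "select datname from pg_database where datallowconn"); do
    echo "running VACUUM FREEZE on $db"
    psql -c "VACUUM FREEZE" "$db"
done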

The VACUUM FREEZE command forces a freeze of a table or an entire database. Freezing exists to guarantee that the distance between the oldest and newest transactions in the database never exceeds roughly 2 billion, thereby preventing transaction ID wraparound.

In PostgreSQL, XIDs are 32-bit unsigned numbers, so the counter overflows (wraps around) easily, which would scramble transaction visibility.
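
The remaining headroom can be checked directly: age(datfrozenxid) reports how far each database's oldest unfrozen XID lags behind the current one, and a successful VACUUM FREEZE should drive it back down. A minimal check:

# lower age = more headroom before wraparound
[gpadmin@gpmaster ~]$ psql -c "select datname, age(datfrozenxid) from pg_database order by 2 desc;"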

Logging back in and running \i amr_n042_company_modify.sql to import the data again, the warnings no longer appeared.
