Steps to Remove a Node from Deepgreen/Greenplum

Neither Greenplum nor Deepgreen officially documents or recommends a way to remove nodes, but in practice it can be done. Because of the uncertainty involved, removing a node can easily cause other problems, so be sure to take a backup first and proceed with caution. The detailed steps follow:

1. Check the current state of the database (12 instances)

[gpadmin@sdw1 ~]$ gpstate
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-Starting gpstate with args:
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.99.00 build Deepgreen DB) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.9.2 20150212 (Red Hat 4.9.2-6) compiled on Jul  6 2017 03:04:10'
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-Gathering data from segments...
..
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-Greenplum instance status summary
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Master instance                                = Active
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Master standby                                 = No master standby configured
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total segment instance count from metadata     = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Primary Segment Status
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total primary segments                         = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total primary segment valid (at master)        = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total primary segment failures (at master)     = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number of postmaster.pid files missing   = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number of postmaster.pid files found     = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs missing    = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number of postmaster.pid PIDs found      = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number of /tmp lock files missing        = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number of /tmp lock files found          = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number postmaster processes missing      = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Total number postmaster processes found        = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Mirror Segment Status
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-   Mirrors not configured on this array
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------

2. Back up the database in parallel

Use the gpcrondump command to back up the database. The details are not repeated here; consult the documentation if you are unfamiliar with it.
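As a sketch only (the database name and backup directory below are placeholders, and the flags should be verified against your gpcrondump version), a full parallel backup invocation could look like the following. For safety the script just builds and prints the command rather than executing it:

```shell
# Sketch only: builds and prints a gpcrondump invocation instead of running it.
# -x selects the database, -a answers prompts automatically, -u writes the dump
# files under the given directory instead of each segment's data directory.
DB_NAME="tpch"             # placeholder: your database name
BACKUP_DIR="/backup/gpdb"  # placeholder: a directory with enough free space on every host
echo "gpcrondump -x ${DB_NAME} -a -u ${BACKUP_DIR}"
```

Check that the dump completed successfully on every segment before moving on to the next step.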

3. Shut down the database

[gpadmin@sdw1 ~]$ gpstop -M fast
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Starting gpstop with args: -M fast
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-Master instance parameters
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Master Greenplum instance process active PID   = 31250
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Database                                       = template1
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Master port                                    = 5432
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Master directory                               = /hgdata/master/hgdwseg-1
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Shutdown mode                                  = fast
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Timeout                                        = 120
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Shutdown Master standby host                   = Off
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-Segment instances that will be shutdown:
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   Host   Datadir                     Port    Status
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg0    25432   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg1    25433   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg2    25434   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg3    25435   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg4    25436   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg5    25437   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg6    25438   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg7    25439   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg8    25440   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg9    25441   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg10   25442   u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg11   25443   u

Continue with Greenplum instance shutdown Yy|Nn (default=N):
> y
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-There are 0 connections to the database
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='fast'
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Master host=sdw1
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Detected 0 connections to database
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Using standard WAIT mode of 120 seconds
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=fast
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Master segment instance directory=/hgdata/master/hgdwseg-1
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-Terminating processes for segment /hgdata/master/hgdwseg-1
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-No standby master host configured
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-Commencing parallel segment instance shutdown, please wait...
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-0.00% of jobs completed
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-100.00% of jobs completed
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-   Segments stopped successfully      = 12
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-   Segments with errors during stop   = 0
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Successfully shutdown 12 of 12 segment instances
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Cleaning up leftover gpmmon process
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-No leftover gpmmon process found
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Cleaning up leftover gpsmon processes
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-No leftover gpsmon processes on some hosts. not attempting forceful termination on these hosts
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Cleaning up leftover shared memory

4. Start the database in admin (master-only) mode

[gpadmin@sdw1 ~]$ gpstart -m
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Starting gpstart with args: -m
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Master-only start requested in configuration without a standby master.

Continue with master-only startup Yy|Nn (default=N):
> y
20170816:12:54:41:098061 gpstart:sdw1:gpadmin-[INFO]:-Starting Master instance in admin mode
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Setting new master era
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Master Started...

5. Log in to the database in utility mode

[gpadmin@sdw1 ~]$ PGOPTIONS="-c gp_session_role=utility" psql -d postgres
psql (8.2.15)
Type "help" for help.

6. Delete the segment

postgres=# select * from gp_segment_configuration;
 dbid | content | role | preferred_role | mode | status | port  | hostname | address | replication_port | san_mounts
------+---------+------+----------------+------+--------+-------+----------+---------+------------------+------------
    1 |      -1 | p    | p              | s    | u      |  5432 | sdw1     | sdw1    |                  |
    2 |       0 | p    | p              | s    | u      | 25432 | sdw1     | sdw1    |                  |
    3 |       1 | p    | p              | s    | u      | 25433 | sdw1     | sdw1    |                  |
    4 |       2 | p    | p              | s    | u      | 25434 | sdw1     | sdw1    |                  |
    5 |       3 | p    | p              | s    | u      | 25435 | sdw1     | sdw1    |                  |
    6 |       4 | p    | p              | s    | u      | 25436 | sdw1     | sdw1    |                  |
    7 |       5 | p    | p              | s    | u      | 25437 | sdw1     | sdw1    |                  |
    8 |       6 | p    | p              | s    | u      | 25438 | sdw1     | sdw1    |                  |
    9 |       7 | p    | p              | s    | u      | 25439 | sdw1     | sdw1    |                  |
   10 |       8 | p    | p              | s    | u      | 25440 | sdw1     | sdw1    |                  |
   11 |       9 | p    | p              | s    | u      | 25441 | sdw1     | sdw1    |                  |
   12 |      10 | p    | p              | s    | u      | 25442 | sdw1     | sdw1    |                  |
   13 |      11 | p    | p              | s    | u      | 25443 | sdw1     | sdw1    |                  |
(13 rows)
postgres=# set allow_system_table_mods='dml';
SET
postgres=# delete from gp_segment_configuration where dbid=13;
DELETE 1
postgres=# select * from gp_segment_configuration;
 dbid | content | role | preferred_role | mode | status | port  | hostname | address | replication_port | san_mounts
------+---------+------+----------------+------+--------+-------+----------+---------+------------------+------------
    1 |      -1 | p    | p              | s    | u      |  5432 | sdw1     | sdw1    |                  |
    2 |       0 | p    | p              | s    | u      | 25432 | sdw1     | sdw1    |                  |
    3 |       1 | p    | p              | s    | u      | 25433 | sdw1     | sdw1    |                  |
    4 |       2 | p    | p              | s    | u      | 25434 | sdw1     | sdw1    |                  |
    5 |       3 | p    | p              | s    | u      | 25435 | sdw1     | sdw1    |                  |
    6 |       4 | p    | p              | s    | u      | 25436 | sdw1     | sdw1    |                  |
    7 |       5 | p    | p              | s    | u      | 25437 | sdw1     | sdw1    |                  |
    8 |       6 | p    | p              | s    | u      | 25438 | sdw1     | sdw1    |                  |
    9 |       7 | p    | p              | s    | u      | 25439 | sdw1     | sdw1    |                  |
   10 |       8 | p    | p              | s    | u      | 25440 | sdw1     | sdw1    |                  |
   11 |       9 | p    | p              | s    | u      | 25441 | sdw1     | sdw1    |                  |
   12 |      10 | p    | p              | s    | u      | 25442 | sdw1     | sdw1    |                  |
(12 rows)

7. Delete the filespace entry

postgres=# select * from pg_filespace_entry;
 fsefsoid | fsedbid |        fselocation
----------+---------+---------------------------
     3052 |       1 | /hgdata/master/hgdwseg-1
     3052 |       2 | /hgdata/primary/hgdwseg0
     3052 |       3 | /hgdata/primary/hgdwseg1
     3052 |       4 | /hgdata/primary/hgdwseg2
     3052 |       5 | /hgdata/primary/hgdwseg3
     3052 |       6 | /hgdata/primary/hgdwseg4
     3052 |       7 | /hgdata/primary/hgdwseg5
     3052 |       8 | /hgdata/primary/hgdwseg6
     3052 |       9 | /hgdata/primary/hgdwseg7
     3052 |      10 | /hgdata/primary/hgdwseg8
     3052 |      11 | /hgdata/primary/hgdwseg9
     3052 |      12 | /hgdata/primary/hgdwseg10
     3052 |      13 | /hgdata/primary/hgdwseg11
(13 rows)
postgres=#  delete from pg_filespace_entry where fsedbid=13;
DELETE 1
postgres=# select * from pg_filespace_entry;
 fsefsoid | fsedbid |        fselocation
----------+---------+---------------------------
     3052 |       1 | /hgdata/master/hgdwseg-1
     3052 |       2 | /hgdata/primary/hgdwseg0
     3052 |       3 | /hgdata/primary/hgdwseg1
     3052 |       4 | /hgdata/primary/hgdwseg2
     3052 |       5 | /hgdata/primary/hgdwseg3
     3052 |       6 | /hgdata/primary/hgdwseg4
     3052 |       7 | /hgdata/primary/hgdwseg5
     3052 |       8 | /hgdata/primary/hgdwseg6
     3052 |       9 | /hgdata/primary/hgdwseg7
     3052 |      10 | /hgdata/primary/hgdwseg8
     3052 |      11 | /hgdata/primary/hgdwseg9
     3052 |      12 | /hgdata/primary/hgdwseg10
(12 rows)
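Putting steps 6 and 7 together, the catalog surgery amounts to the sketch below (dbid 13 is the segment removed in this example). Note that segment content IDs are expected to stay contiguous from 0 upward, so in practice only the highest-numbered segment can safely be dropped this way:

```sql
-- Run in a utility-mode session on the master, only after a full backup.
set allow_system_table_mods='dml';                     -- allow direct DML on system catalogs (GP 4.3 syntax)
delete from gp_segment_configuration where dbid = 13;  -- remove the segment's registration
delete from pg_filespace_entry where fsedbid = 13;     -- remove its filespace location
```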

8. Exit admin mode and start the database normally

[gpadmin@sdw1 ~]$ gpstop -m
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Starting gpstop with args: -m
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-There are 0 connections to the database
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Master host=sdw1
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Master segment instance directory=/hgdata/master/hgdwseg-1
20170816:12:56:53:098095 gpstop:sdw1:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20170816:12:56:53:098095 gpstop:sdw1:gpadmin-[INFO]:-Terminating processes for segment /hgdata/master/hgdwseg-1
[gpadmin@sdw1 ~]$ gpstart
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Starting gpstart with args:
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Starting Master instance in admin mode
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Setting new master era
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Master Started...
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Shutting down master
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master instance parameters
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Database                 = template1
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master Port              = 5432
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master directory         = /hgdata/master/hgdwseg-1
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Timeout                  = 600 seconds
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master standby           = Off
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Segment instances that will be started
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   Host   Datadir                     Port
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg0    25432
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg1    25433
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg2    25434
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg3    25435
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg4    25436
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg5    25437
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg6    25438
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg7    25439
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg8    25440
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg9    25441
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-   sdw1   /hgdata/primary/hgdwseg10   25442

Continue with Greenplum instance startup Yy|Nn (default=N):
> y
20170816:12:57:07:098112 gpstart:sdw1:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
.......
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-Process results...
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-   Successful segment starts                                            = 11
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-   Failed segment starts                                                = 0
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-Successfully started 11 of 11 segment instances
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-Starting Master instance sdw1 directory /hgdata/master/hgdwseg-1
20170816:12:57:15:098112 gpstart:sdw1:gpadmin-[INFO]:-Command pg_ctl reports Master sdw1 instance active
20170816:12:57:15:098112 gpstart:sdw1:gpadmin-[INFO]:-No standby master configured.  skipping...
20170816:12:57:15:098112 gpstart:sdw1:gpadmin-[INFO]:-Database successfully started

9. Use psql to restore the removed node's backup data into the current database

psql -d postgres -f xxxx.sql # the restore process is not covered in detail here
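For illustration (the dump file name below is a placeholder; the actual name depends on your gpcrondump timestamp), the restore invocation can be sketched as follows, again printing the command rather than executing it:

```shell
# Sketch only: prints the psql restore command for the removed node's dump.
DUMP_FILE="/backup/gpdb/removed_node_dump.sql"  # placeholder: actual file name comes from your backup
echo "psql -d postgres -f ${DUMP_FILE}"
```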

Notes:

1) This procedure restores only the data that was on the removed node.

2) Running this procedure in reverse will add the removed node back, but restoring the data is time-consuming, roughly comparable to rebuilding the database and restoring from backup.

Posted: 2024-09-16 14:08:26
