
Best practice to swap out nodes for maintenance.

The sysadmins for our database cluster would like to cycle through all nodes and, for each node, wipe/restructure the primary data disk partition.  The basic strategy we'd planned to use was:

 

1) Remove Node 1 from the cluster (rebalancing to ensure the remaining nodes stay K-safe=1 during maintenance and recovery of Node 1)

2) Perform maintenance on node 1 (wiping out all data)

3) Add Node 1 back as a new node

4) Wait for Node 1 to switch from RECOVERING to UP

5) Repeat 1 - 4 for all other nodes (2, 3, 4...)
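In admintools terms, I pictured that cycle looking roughly like the sketch below (the database name "mydb" and the host address are placeholders, and the exact tool options may differ by Vertica version):

    # 1) Remove node 1 from the cluster (after making the projection/design
    #    changes the documentation requires)
    admintools -t db_remove_node -d mydb -s 192.168.1.101

    # 2) Wipe/repartition the primary data disk on that host

    # 3) Add the host back as a brand-new node
    admintools -t db_add_node -d mydb -s 192.168.1.101

    # 4) Watch the node go from RECOVERING to UP
    vsql -d mydb -c "SELECT node_name, node_state FROM nodes;"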

 

However, according to the documentation, a prerequisite of removing a node is: 

  • The node must be empty; in other words, there should be no projections referring to the node. Ensure you have followed the steps listed in Removing Nodes to modify your database design.

 

This leads me to think the strategy may be biting off far more work than necessary (i.e., it would require a lot of projection-level operations before each node could be removed "safely").

 

Would it make more sense to:

1) Simply stop the database on node 1 (putting the K-safe=1 cluster into a critical state).

2) Perform maintenance on that node, wiping all of its data.

3) Restart database on node 1.

4) Wait for Node 1 to recover

5) Repeat steps 1-4 for remaining nodes
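Command-wise, I imagine that would be something like the following (again just a sketch with placeholder names; on some Vertica versions the stop tool is called stop_host rather than stop_node):

    # 1) Stop Vertica on node 1 only; the K-safe=1 cluster goes critical
    admintools -t stop_node -s 192.168.1.101

    # 2) Wipe/repartition the primary data disk on that host

    # 3) Restart Vertica on node 1
    admintools -t restart_node -d mydb -s 192.168.1.101

    # 4) Wait until the node reports UP again
    vsql -d mydb -c "SELECT node_name, node_state FROM nodes WHERE node_state <> 'UP';"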

 

Beyond operating the cluster in a critical state during the maintenance and recovery windows, are there any other unforeseen hazards to this approach?  Will Vertica recover each node cleanly, despite the fact that all pre-existing data on that node will have been wiped out?
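For reference, I was planning to keep an eye on the cluster's fault tolerance during each window with something like the query below (column names are my assumption based on the v_monitor SYSTEM table):

    vsql -d mydb -c "SELECT designed_fault_tolerance, current_fault_tolerance, node_down_count FROM system;"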

Comments


    The second approach is better: leave the node in the cluster and just let it recover after the maintenance is complete.  You can do the maintenance on multiple nodes at once, depending on the size of the cluster.  You can't take down dependent nodes at the same time, and it's best to avoid having two nodes recovering from the same dependent node.  In other words, if your nodes are node1 to node10 and the dependencies are "simple" (node2 depends on its neighbors node1 and node3), then avoid doing simultaneous maintenance on node2 and node4, since they would both be recovering data from node3 and node3's I/O would become a bottleneck.  But it would be fine to do simultaneous maintenance on node2 and node5.

     

    You can verify node dependencies using "select get_node_dependencies()".  The bits represent the nodes, with the rightmost position being node1.  You can also check the CRITICAL_NODES table as you take down nodes, but it's usually better to know the dependencies ahead of time so that you can plan around them.
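    For example (run through vsql; the database name here is a placeholder, and the output format varies a little between versions):

        vsql -d mydb -c "SELECT GET_NODE_DEPENDENCIES();"
        # Each row is a bitmask over the nodes (rightmost bit = node1); a row
        # with two bits set means those nodes hold copies of the same data,
        # so they recover from each other.

        vsql -d mydb -c "SELECT * FROM critical_nodes;"
        # Lists any node whose loss right now would take the database down.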

     

    When you bring up the nodes that have had their data disks wiped, you will probably need to manually recreate the _catalog and _data directories.
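    For example, assuming a default layout under the dbadmin home directory (the paths, node name, and ownership here are assumptions; adjust them to match your install):

        # Hypothetical paths for node0001 of a database named "mydb"
        mkdir -p /home/dbadmin/mydb/v_mydb_node0001_catalog
        mkdir -p /home/dbadmin/mydb/v_mydb_node0001_data
        chown -R dbadmin:verticadba /home/dbadmin/mydb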

     

      --Sharon

     


    Thanks, I will take this advice into account!
