Vertica Node Down

Hi Everyone,

We had a scenario where our 3-node cluster went down. It is hosted in Azure. My initial analysis shows that the problem started with node 01 and seems to be a network issue, but I'm still trying to verify that. Here are the logs.

Vertica.log

`2017-04-03 20:56:43.093 AnalyzeRowCount:0x7f852c020040-a00000001b6416 [Txn] Rollback Txn: a00000001b6416 'do_tm_task row count analyze'
2017-04-03 20:56:43.094 AnalyzeRowCount:0x7f852c020040-a00000001b6417 [Txn] Begin Txn: a00000001b6417 'static void CAT::CatalogQueries::reserveJVMForProjectionIfReqd(OidSet)'
2017-04-03 20:56:43.094 AnalyzeRowCount:0x7f852c020040-a00000001b6417 [Txn] Rollback Txn: a00000001b6417 'static void CAT::CatalogQueries::reserveJVMForProjectionIfReqd(OidSet)'
2017-04-03 20:56:43.094 AnalyzeRowCount:0x7f852c020040 [Command] TMTask: row count analyze - start
2017-04-03 20:56:43.102 AnalyzeRowCount:0x7f852c020040 [Command] TMTask: row count analyze - Done
2017-04-03 20:56:43.103 AnalyzeRowCount:0x7f852c020040 [Util] Task 'AnalyzeRowCount' enabled
2017-04-03 20:56:53.000 DiskSpaceRefresher:0x7f852c01dcf0 [Util] Task 'DiskSpaceRefresher' enabled
2017-04-03 20:57:03.000 DiskSpaceRefresher:0x7f852c01dcf0 [Util] Task 'DiskSpaceRefresher' enabled
2017-04-03 20:57:13.000 DiskSpaceRefresher:0x7f852c01e110 [Util] Task 'DiskSpaceRefresher' enabled
2017-04-03 20:57:36.731 DiskSpaceRefresher:0x7f852c01e110 [Util] Task 'DiskSpaceRefresher' enabled
2017-04-03 20:57:36.735 DiskSpaceRefresher:0x7f852c0176c0 [Util] Task 'DiskSpaceRefresher' enabled
2017-04-03 20:58:13.748 DiskSpaceRefresher:0x7f852c020aa0 [Util] Task 'DiskSpaceRefresher' enabled
2017-04-03 20:58:13.752 DiskSpaceRefresher:0x7f852c020aa0 [Util] Task 'DiskSpaceRefresher' enabled
2017-04-03 20:58:13.755 AnalyzeRowCount:0x7f852c020040-a00000001b6418 [Txn] Begin Txn: a00000001b6418 'getRowCountsForProj'
2017-04-03 20:58:13.807 AnalyzeRowCount:0x7f852c020040-a00000001b6418 [Txn] Rollback Txn: a00000001b6418 'getRowCountsForProj'
2017-04-03 20:58:23.001 DiskSpaceRefresher:0x7f852c01e110 [Util] Task 'DiskSpaceRefresher' enabled
2017-04-03 20:58:27.002 TM Moveout:0x7f852c01e110-a00000001b6419 [Txn] Begin Txn: a00000001b6419 'Moveout: Tuple Mover'
2017-04-03 20:58:27.002 TM Moveout:0x7f852c01e110-a00000001b6419 [Txn] Rollback Txn: a00000001b6419 'Moveout: Tuple Mover'
2017-04-03 20:58:27.003 TM Moveout:0x7f852c01e110 [Util] Task 'TM Moveout' enabled
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] Saw membership message 8192 (0x2000) on V:medicaid
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] Saw transitional message; watch for lost daemons
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] Saw membership message 8192 (0x2000) on Vertica:all
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] Saw transitional message; watch for lost daemons
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] Saw membership message 8192 (0x2000) on Vertica:join
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] Saw transitional message; watch for lost daemons
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] Saw membership message 6144 (0x1800) on V:medicaid
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] NETWORK change with 1 VS sets
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] VS set #0 (mine) has 1 members (offset=36)

2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] VS set #0, member 0: #node_a#N010000000016

2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] DB Group changed
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [VMPI] DistCall: Set current group members called with 1 members
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [VMPI] Removing 45035996273721526 from list of initialized nodes for session v_medicaid_node0001-2678:0x28de36
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [VMPI] Removing 45035996273721530 from list of initialized nodes for session v_medicaid_node0001-2678:0x28de36
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] nodeSetNotifier: node v_medicaid_node0002 left the cluster
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Recover] Node left cluster, reassessing k-safety...
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Recover] Checking Deps:Down bits: 010 Deps:
011 - cnt: 25
101 - cnt: 25
110 - cnt: 25
111 - cnt: 339
2017-04-03 20:58:29.763 AnalyzeRowCount:0x7f852c020040 @v_medicaid_node0001: 01000/4539: Received no response from v_medicaid_node0002 in GetRowCountMessage
2017-04-03 20:58:29.763 Spread Client:0x97091b0 @v_medicaid_node0001: 00000/3298: Event Posted: Event Code:3 Event Id:0 Event Severity: Critical [2] PostedTimestamp: 2017-04-03 20:58:29.763538 ExpirationTimestamp: 2085-04-21 23:12:36.763538 EventCodeDescription: Current Fault Tolerance at Critical Level ProblemDescription: Loss of node v_medicaid_node0001 will cause shutdown to occur. K=1 total number of nodes=3 DatabaseName: medicaid Hostname: usdevdbv02
2017-04-03 20:58:29.763 AnalyzeRowCount:0x7f852c020040 @v_medicaid_node0001: 01000/4539: Received no response from v_medicaid_node0003 in GetRowCountMessage
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] nodeSetNotifier: node v_medicaid_node0003 left the cluster
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Recover] Node left cluster, reassessing k-safety...
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Recover] Cluster partitioned: 3 total nodes, 1 up nodes, 2 down nodes
`
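For reference, the membership events in an excerpt like the one above can be pulled out of Vertica.log programmatically. A minimal sketch; the regexes are written against the line format shown here, so they may need adjusting for other Vertica versions:

```python
import re

# Match "nodeSetNotifier: node <name> left the cluster" lines.
LEFT_RE = re.compile(
    r"^(?P<ts>\S+ \S+) .*nodeSetNotifier: node (?P<node>\S+) left the cluster"
)
# Match "Cluster partitioned: X total nodes, Y up nodes, Z down nodes" lines.
PARTITION_RE = re.compile(
    r"^(?P<ts>\S+ \S+) .*Cluster partitioned: (?P<total>\d+) total nodes, "
    r"(?P<up>\d+) up nodes, (?P<down>\d+) down nodes"
)

def membership_timeline(lines):
    """Build a chronological list of membership events from log lines."""
    events = []
    for line in lines:
        m = LEFT_RE.match(line)
        if m:
            events.append((m["ts"], "left", m["node"]))
            continue
        m = PARTITION_RE.match(line)
        if m:
            events.append((m["ts"], "partitioned", f"{m['up']}/{m['total']} up"))
    return events

# Sample lines copied from the excerpt above.
sample = [
    "2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] nodeSetNotifier: node v_medicaid_node0002 left the cluster",
    "2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] nodeSetNotifier: node v_medicaid_node0003 left the cluster",
    "2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Recover] Cluster partitioned: 3 total nodes, 1 up nodes, 2 down nodes",
]
for event in membership_timeline(sample):
    print(event)
```

Note that both "left the cluster" events and the partition verdict carry the same timestamp (20:58:29.763), which is consistent with node 01 losing contact with both peers at once rather than one node failing first.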

messages

`Apr 3 20:50:01 usdevdbv02 systemd: Removed slice user-0.slice.
Apr 3 20:50:01 usdevdbv02 systemd: Stopping user-0.slice.
Apr 3 21:00:01 usdevdbv02 systemd: Created slice user-0.slice.
Apr 3 21:00:01 usdevdbv02 systemd: Starting user-0.slice.
Apr 3 21:00:01 usdevdbv02 systemd: Started Session 4501 of user root.
Apr 3 21:00:01 usdevdbv02 systemd: Starting Session 4501 of user root.
Apr 3 21:00:01 usdevdbv02 systemd: Removed slice user-0.slice.
Apr 3 21:00:01 usdevdbv02 systemd: Stopping user-0.slice.
Apr 3 21:01:01 usdevdbv02 systemd: Created slice user-0.slice.
Apr 3 21:01:01 usdevdbv02 systemd: Starting user-0.slice.
Apr 3 21:01:01 usdevdbv02 systemd: Started Session 4502 of user root.
Apr 3 21:01:01 usdevdbv02 systemd: Starting Session 4502 of user root.
Apr 3 21:01:01 usdevdbv02 systemd: Removed slice user-0.slice.
Apr 3 21:01:01 usdevdbv02 systemd: Stopping user-0.slice.
Apr 3 21:04:46 usdevdbv02 /usr/local/bin/azsecd[140998]: Starting 'heartbeat' scan
Apr 3 21:04:46 usdevdbv02 azsecd: 2017/04/03 21:04:46 INFO: Starting 'heartbeat' scan
Apr 3 21:04:47 usdevdbv02 /usr/local/bin/azsecd[140998]: Scan 'heartbeat' completed
Apr 3 21:04:47 usdevdbv02 azsecd: 2017/04/03 21:04:47 INFO: Scan 'heartbeat' completed
Apr 3 21:10:01 usdevdbv02 systemd: Created slice user-0.slice.`

vnetperf

`The maximum recommended rtt latency is 2 milliseconds. The ideal rtt latency is 200 microseconds or less. It is recommended that clock skew be kept to under 1 second.

test | date | node | index | rtt latency (us) | clock skew (us)

latency | 2017-04-12_10:08:47,715 | 10.0.0.16 | 0 | 41 | 3
latency | 2017-04-12_10:08:47,715 | 10.0.0.19 | 1 | 537 | 968
latency | 2017-04-12_10:08:47,715 | 10.0.0.20 | 2 | 882 | 586`
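The vnetperf numbers above can be checked against the thresholds vnetperf itself states (2 ms maximum recommended rtt, 200 us ideal, clock skew under 1 second). A quick sketch using the values from this run:

```python
# Thresholds as reported by vnetperf, in microseconds.
MAX_RTT_US = 2000        # maximum recommended rtt (2 ms)
IDEAL_RTT_US = 200       # ideal rtt
MAX_SKEW_US = 1_000_000  # clock skew should stay under 1 second

# (node, rtt_us, skew_us) from the vnetperf run above.
results = [
    ("10.0.0.16", 41, 3),
    ("10.0.0.19", 537, 968),
    ("10.0.0.20", 882, 586),
]

def classify(rtt_us):
    """Rate a round-trip latency against vnetperf's stated limits."""
    if rtt_us <= IDEAL_RTT_US:
        return "ideal"
    if rtt_us <= MAX_RTT_US:
        return "acceptable"
    return "too high"

for node, rtt, skew in results:
    skew_ok = "ok" if skew < MAX_SKEW_US else "excessive"
    print(f"{node}: rtt={rtt}us ({classify(rtt)}), skew={skew}us ({skew_ok})")
```

All three nodes come in under the limits, so this run alone doesn't explain the outage. Worth noting, too, that the vnetperf timestamps are from April 12, nine days after the April 3 incident, so a clean result here shows current network health rather than conditions at failure time.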

Any input or direction would be highly appreciated.

Comments

  • emoreno Employee

    When you see this message in the log:

    NETWORK change with 1 VS sets

    it means that spread did not receive packets from the other nodes. Either the packets were lost, or spread did not read them. There is a good document in our community on how to debug spread.

    I would recommend reading that doc to debug this further.

    Hope this helps.
    Eugenia
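Following Eugenia's suggestion, a first pass at spread debugging is simply counting the warning lines spread leaves behind; a burst of "Saw transitional message" right before a membership change points at lost spread packets. A sketch against sample lines copied from the log above (point the greps at your real Vertica.log instead of the sample file):

```shell
#!/bin/sh
# Write a few lines from the Vertica.log excerpt above into a sample file,
# then count spread's two key warning patterns. On a real system, replace
# /tmp/vertica.log.sample with the path to your Vertica.log.
cat > /tmp/vertica.log.sample <<'EOF'
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] Saw transitional message; watch for lost daemons
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] Saw transitional message; watch for lost daemons
2017-04-03 20:58:29.763 Spread Client:0x97091b0 [Comms] NETWORK change with 1 VS sets
EOF

# Transitional messages: spread suspects daemons may have been lost.
grep -c 'Saw transitional message' /tmp/vertica.log.sample

# NETWORK change events: spread recomputed the membership view.
grep -c 'NETWORK change' /tmp/vertica.log.sample
```

Correlating the timestamps of these lines across all three nodes' logs should show whether the whole cluster saw the same network event or only node 01 did.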

  • edited April 2017

    Hi rsalayo,
    You say the cluster went down, meaning it was running OK earlier? I ask because in Azure (like most virtual environments), the cluster must be installed using the point-to-point flag. UDP broadcast traffic in virtual networks tends to be "iffy", and Azure is no exception.

    -Chris
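Chris's point can be checked on an existing install by looking at spread's configuration. The path and stanza below are assumptions (the spread config often lives under /opt/vertica/config and its exact name varies by version): a Spread_Segment whose address ends in .255 indicates UDP broadcast, which point-to-point installs avoid. A minimal sketch against a sample stanza:

```shell
#!/bin/sh
# Sample Spread_Segment stanza in the spread.conf format; the .255
# segment address means UDP broadcast on port 4803 (spread's default).
# On a real system, grep your actual spread config file instead.
cat > /tmp/vspread.conf.sample <<'EOF'
Spread_Segment 10.0.0.255:4803 {
    N010000000016 10.0.0.16
    N010000000019 10.0.0.19
    N010000000020 10.0.0.20
}
EOF

if grep -Eq '^Spread_Segment [0-9.]+\.255:' /tmp/vspread.conf.sample; then
    echo "broadcast segment found - consider reinstalling with --point-to-point"
else
    echo "no broadcast segment - likely point-to-point"
fi
```

If a broadcast segment turns up, re-running the Vertica installer with the --point-to-point flag switches the cluster's spread communication to unicast, which is what Chris is recommending for Azure.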
