
How to restore data into 3 nodes?

Hello,
Initially I had a single node, then I moved to a 3-node cluster. I took a backup on the single node, and now I need to restore that backup onto the 3 nodes.

[root@host1 user_testing]# cat vertica_backup.ini 
[Misc]
snapshotName = vertica_backup
restorePointLimit = 1
objectRestoreMode = createOrReplace
tempDir = /tmp/vbr
retryCount = 5
retryDelay = 3
passwordFile = pwdFile

[Database]
dbName = db_testing_0
dbUser = user_testing
dbPromptForPassword = True

[Transmission]
encrypt = False
checksum = False
serviceAccessUser = user_testing
total_bwlimit_backup = 0
concurrency_backup = 1
total_bwlimit_restore = 0
concurrency_restore = 1

[Mapping]
v_db_testing_0_node0001 = host1:home/user_testing/backup/

[NodeMapping]
v_db_testing_0_node0001 = v_db_testing_0_node0001
v_db_testing_0_node0001 = v_db_testing_0_node0002
v_db_testing_0_node0001 = v_db_testing_0_node0003

I have defined the [NodeMapping] section like this. Is this correct?

Comments

  • edited February 2018

    When I start the restore process, only node 1 participates; the other 2 nodes are missing:

    [user_testing@host1 ~]$ /opt/vertica/bin/vbr -t restore --config-file vertica_backup.ini 
    
    Warning: config mapping is missing entries for nodes: v_db_testing_0_node0003, v_db_testing_0_node0002
    
    Starting full restore of database db_testing_0.
    Participating nodes: v_db_testing_0_node0001.
    Restoring from restore point: vertica_backup_20180209_051946
    Determining what data to restore from backup.
    [==================================================] 100%
    Approximate bytes to copy: 46865774354 of 46865774354 total.
    Syncing data from backup to cluster nodes.
    [=======================...........................] 46%
    

    What do I have to do?
    Thanks in advance !!

  •

    After the restore completed, I started the database using admintools:

    *** Starting database: db_testing_0 ***
        Starting nodes: 
            v_db_testing_0_node0001 (103.##.##.##)
        Starting Vertica on all nodes. Please wait, databases with a large catalog may take a while to initialize.
        Node Status: v_db_testing_0_node0001: (DOWN) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (DOWN) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (DOWN) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (DOWN) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (DOWN) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (DOWN) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (DOWN) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (DOWN) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (DOWN) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (INITIALIZING) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
    Nodes DOWN: v_db_testing_0_node0001, v_db_testing_0_node0003, v_db_testing_0_node0002 (may be still initializing).
    It is suggested that you continue waiting.
    Do you want to continue waiting? (yes/no) [yes] yes
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
    Nodes UP: v_db_testing_0_node0001
    Nodes DOWN: v_db_testing_0_node0003, v_db_testing_0_node0002 (may be still initializing).
    It is suggested that you continue waiting.
    Do you want to continue waiting? (yes/no) [yes] yes
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
        Node Status: v_db_testing_0_node0001: (UP) v_db_testing_0_node0002: (DOWN) v_db_testing_0_node0003: (DOWN) 
    Nodes UP: v_db_testing_0_node0001
    Nodes DOWN: v_db_testing_0_node0003, v_db_testing_0_node0002 (may be still initializing).
    It is suggested that you continue waiting.
    Do you want to continue waiting? (yes/no) [yes] yes
    

    Still waiting :(

  •

    Can anybody help me? :'(

  • s_crossman (Vertica Employee)

    Hi,

    Per the Restoring a Full Backup section of the docs:
    To restore a full database backup, you must verify that:
    ...
    The cluster to which you are restoring the backup has:
    • The same number of hosts as the one used to create the backup
    • Identical node names

    You can only restore a full backup to a cluster with the same number of nodes, and those nodes must have exactly the same names as the source cluster's nodes. So it is not possible to restore a backup taken on a single node to a multi-node cluster. The recommendation is to create a single-node cluster, restore from the backup, verify that the database comes up, then add the other two nodes to the cluster and to the database, and let Vertica redistribute the data across the nodes.
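
    For illustration, a rough sketch of that workflow (commands are indicative only; the host names and the RPM path are placeholders, and update_vertica and admintools options can vary by Vertica version):

    # 1. Restore the backup onto the recreated single-node cluster
    /opt/vertica/bin/vbr -t restore --config-file vertica_backup.ini
    # 2. Start the database and verify it comes up
    admintools -t start_db -d db_testing_0
    # 3. As root, add the two new hosts to the Vertica cluster
    sudo /opt/vertica/sbin/update_vertica --add-hosts host2,host3 --rpm /tmp/vertica-x.y.z.rpm
    # 4. Add the new hosts as database nodes
    admintools -t db_add_node -d db_testing_0 -s host2,host3
    # 5. Redistribute existing data across all three nodes
    vsql -d db_testing_0 -c "SELECT REBALANCE_CLUSTER();"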

    I hope it helps.

  •

    Hello @s_crossman,
    I have done exactly that, but the other 2 nodes are still DOWN.

  • s_crossman (Vertica Employee)

    Hi,

    Please provide details on the order of things you did so it's clear what was done when. Basically:

    • How many nodes were in the database when you took the backup (was it before or after adding the 2)?
    • Were any nodes in the database down when you took the backup?
    • How many nodes were in the database when you attempted the restore?
    • Were any of the original database hosts down during the restore?

    There are a lot of requirements for backup and restore; if any of them is not met, the operation will fail.

    Of note, the NodeMapping can't be used the way you had it in the configuration file in your first post. Each source node on the left must map to its one corresponding target node on the right: node0001 to node0001, node0002 to node0002, and so on. You can't map a single source node to multiple target nodes.
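
    For illustration only, a valid [NodeMapping] section looks like this (hypothetical node names; it applies only when the source and target clusters have the same number of nodes, e.g. restoring a 3-node backup to a 3-node cluster whose nodes are named differently):

    [NodeMapping]
    ; each source node maps to exactly one target node
    v_db_testing_0_node0001 = v_db_copy_node0001
    v_db_testing_0_node0002 = v_db_copy_node0002
    v_db_testing_0_node0003 = v_db_copy_node0003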

    A timeline of what you are trying to do and what you actually did, in the order you did it, may help us guide you better.

  • edited February 2018

    Hello @s_crossman,
    I created a single-node cluster and restored from the backup, but I get an error while starting the database:

    *** Starting database: db_testing_0 ***
        Starting nodes: 
            v_db_testing_0_node0001 (103.##.##.##)
        Starting Vertica on all nodes. Please wait, databases with a large catalog may take a while to initialize.
        103.##.##.##  failed. Result:
    <ATResult> status=2 host=103.##.##.## error_type=<class 'vertica.engine.api.errors.ATReceiveFailure_Init'> error_message=Problem json decoding message '{"status": null, "content": {"special_environment": null}, "error_type": null, "error_message": null, "exec_stack": null}'. Error was: None is not a valid status.
            Stack Trace from Initiator follows
    Traceback (most recent call last):
      File "/opt/vertica/oss/python/lib/python2.7/site-packages/vertica/network/adapters/bash_adapter.py", line 250, in exec_module
        result = self._exec_module_impl(command, timeout)
      File "/opt/vertica/oss/python/lib/python2.7/site-packages/vertica/network/adapters/bash_adapter.py", line 272, in _exec_module_impl
        result.from_wire_message(raw_buffer)
      File "/opt/vertica/oss/python/lib/python2.7/site-packages/vertica/engine/api/at_result.py", line 82, in from_wire_message
        "{0}. Error was: {1}.".format(repr(msg), e))
    ATReceiveFailure_Init: Problem json decoding message '{"status": null, "content": {"special_environment": null}, "error_type": null, "error_message": null, "exec_stack": null}'. Error was: None is not a valid status.
    Check log for more information.
    Press RETURN to continue
    

    What do I have to do?

  • s_crossman (Vertica Employee)

    Hi,

    I can't decipher much from that output. The only hints may be in the /tmp directory, where the vbr logs are written. You could find the log for this attempt and review it for additional information.
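
    For example, something like this should surface the most recent log (a sketch; it assumes the logs landed under the tempDir from your config, /tmp/vbr, with a .log extension; adjust the path if yours land elsewhere):

    # list vbr logs, newest first
    ls -lt /tmp/vbr/*.log | head -5
    # inspect the most recent one
    tail -n 100 "$(ls -t /tmp/vbr/*.log | head -1)"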

    Again, if the original backup was taken against a multi-node cluster/database (regardless of how many nodes were functional), you cannot restore it to a single-node cluster/database. I couldn't tell that detail because you didn't supply the specific info requested in my last message on this thread.

    Please provide details on the order of things you did so it's clear what was done when. Basically:
    • How many nodes were in the database when you took the backup (was it before or after adding the 2)?
    • Were any nodes in the database down when you took the backup?
    • How many nodes were in the database when you attempted the restore? (It looks like 1 in this latest attempt, but I'd like to confirm.)
    • Were any of the original database hosts down during the restore?

  •

    There was only one node in the database when I took the backup. After taking the backup, I added 2 nodes to it.

  •

    Since restore doesn't work across a different number of nodes, you have to create a single-node database matching the original (where you took the backup), restore the backup, start the single-node DB, and then add the 2 new nodes.
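
    Once the nodes are added, a quick sanity check from vsql might look like this (a sketch; assumes you can connect to the database from host1):

    -- all three nodes should report UP
    SELECT node_name, node_state FROM nodes;
    -- then redistribute the existing data across the new nodes
    SELECT REBALANCE_CLUSTER();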
