Can't create new db via MC.

This is the issue I'm getting; I don't think error 127 is very informative.
There is no error before this output.

14 Jan 2018 11:09:51,627 [Thread-8555] StitchCluster INFO  - **** Done Processing stitching output ****
14 Jan 2018 11:09:51,628 [Thread-8555] StitchCluster INFO  - Step 4 FINISHED Stitch status = 
14 Jan 2018 11:09:51,630 [Thread-8555] GenericTaskBuilder INFO  - GenericTaskBuilder POST_INSTALL_CLEANUP
14 Jan 2018 11:09:51,641 [Thread-8555] PostInstallCleanupTask INFO  - ------START---------Executing Post Validation Script, host: 172.31.20.7-----------------------

14 Jan 2018 11:09:53,495 [Thread-8555] PostInstallCleanupTask INFO  - processStreams>> exitCode: 127
14 Jan 2018 11:09:53,495 [Thread-8555] PostInstallCleanupTask ERROR - Failure condition
14 Jan 2018 11:09:53,495 [Thread-8555] PostInstallCleanupTask INFO  - ------END---------Executing Post Validation Script, host: host: 172.31.20.7-----------------------

14 Jan 2018 11:09:53,506 [Thread-8555] PostInstallCleanupTask INFO  - ------START---------Cleaning up Remote Temp Folder, host: host: 172.31.20.7-----------------------

14 Jan 2018 11:09:55,354 [Thread-8555] PostInstallCleanupTask INFO  - processStreams>> exitCode: 0
14 Jan 2018 11:09:55,354 [Thread-8555] PostInstallCleanupTask INFO  - **** PASSED cleanup ****
14 Jan 2018 11:09:55,355 [Thread-8555] PostInstallCleanupTask INFO  - Cleaning up remote temp folder
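
If that "exitCode: 127" is the shell's own exit status, it conventionally means "command not found": /bin/sh returns 127 when the program the post-validation script tried to run isn't on the PATH of the target host. A tiny illustration of where that number comes from (hypothetical command name, not MC code):

import subprocess

# /bin/sh exits with status 127 when it cannot find the requested command
# (126 would mean "found but not executable").
result = subprocess.run("definitely_not_a_real_command", shell=True,
                        capture_output=True, text=True)
print(result.returncode)  # typically 127 on Linux
print(result.stderr)      # e.g. "sh: definitely_not_a_real_command: not found"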

The instances are rolled back before I press the rollback button in the MC.

Comments

  • Alexey,
    Check your AWS limits. To get to them, look down the left navigation pane of the EC2 dashboard: https://console.aws.amazon.com/ec2/v2/home?region=us-east-1#Limits:

    You will want to ensure that you have enough headroom under both your Elastic IP limit and your instance limit for the instance type you are interested in (a rough way to check this from a script is sketched below).

    Please check back if this does not resolve the problem.
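
    A minimal sketch of that check with boto3 (hypothetical helper, not part of MC; assumes credentials for the same account and region MC provisions into, and an example instance type):

    import boto3

    # Hypothetical helper: compare current Elastic IP usage against the account
    # limit reported by EC2, and count running instances of the type you plan to use.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    attrs = ec2.describe_account_attributes(AttributeNames=["vpc-max-elastic-ips"])
    eip_limit = int(attrs["AccountAttributes"][0]["AttributeValues"][0]["AttributeValue"])
    eips_in_use = len(ec2.describe_addresses()["Addresses"])
    print(f"Elastic IPs: {eips_in_use} in use, limit {eip_limit}")

    instances = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]},
                 {"Name": "instance-type", "Values": ["c4.4xlarge"]}]  # example type
    )
    running = sum(len(r["Instances"]) for r in instances["Reservations"])
    print(f"Running c4.4xlarge instances: {running}")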

  • Alexey
    edited January 2018

    From what I understand (looking at the MC log),
    the MC installer terminates the instances; it is not an AWS limit termination.

    14 Jan 2018 11:11:21,477 [Thread-8555] ProvisioningThread INFO  - Vertica AWS provisioning completion check...
    14 Jan 2018 11:11:21,492 [Thread-8555] ProvisioningThread INFO  - Instances created: true
    14 Jan 2018 11:11:21,493 [Thread-8555] ProvisioningThread INFO  - Provisioning status: PROVISIONING_IN_PROGRESS
    14 Jan 2018 11:11:21,493 [Thread-8555] ProvisioningThread WARN  - Rollback and remove all instances and AWS resources.
    14 Jan 2018 11:11:21,493 [Thread-8555] ProvisioningThread INFO  - Rollback at provioning step: 3
    14 Jan 2018 11:11:21,493 [Thread-8555] ProvisioningThread INFO  - Rollback instances by IPs: [*.*.*.*, *.*.*.*, *.*.*.*]
    14 Jan 2018 11:11:21,493 [Thread-8555] AWSCloudServiceImpl INFO  - Start terminating instances for [*.*.*.*, *.*.*.*, *.*.*.*]
    14 Jan 2018 11:11:21,502 [Thread-8612] AWSManager INFO  - Received instance termination request for [*.*.*.*, *.*.*.*, *.*.*.*]
    14 Jan 2018 11:11:23,010 [Thread-8612] AWSManager INFO  - Terminate for: [i-0a*****, i-0e*****, i-0e*****]
    14 Jan 2018 11:11:35,268 [Thread-8612] AWSManager INFO  - Waiting for instance termination...
    

    I mean, it does run install_vertica successfully:

    Creating node node0001 definition for host *.*.*.*
    ... Done
    Creating node node0002 definition for host *.*.*.*
    ... Done
    Creating node node0003 definition for host *.*.*.*
    ... Done
    Sending new cluster configuration to all nodes...
    Starting agent...
    Completing installation...
    Running upgrade logic

    If there were a limit issue, it wouldn't be able to SSH to the instances and run the install on them (one way to double-check what terminated them is sketched below).
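
    A rough way to double-check from the AWS side (hypothetical snippet, assumes boto3; recently terminated instances stay visible to describe_instances for a while, and the instance id below is a placeholder):

    import boto3

    # Hypothetical check: StateReason distinguishes a user/API-initiated
    # termination (what MC's rollback would look like) from an AWS-side
    # capacity or limit problem.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.describe_instances(InstanceIds=["i-0a1234567890abcde"])  # placeholder id
    for reservation in resp["Reservations"]:
        for inst in reservation["Instances"]:
            print(inst["InstanceId"],
                  inst["State"]["Name"],
                  inst.get("StateReason", {}).get("Message", "n/a"))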

  • Hi Alexey,
    Could you please send me your MC logs? I need to get more information about the failure itself. My email is yixin.hua@microfocus.com
    Thanks,
    Michael

  • Hi Alexey,
    There is a known issue that we caught but did not fix in this release. If you have provisioned an Eon cluster with a given communal storage URL once, provisioning another cluster with the same communal storage will fail. The reason is that Vertica tries to protect the communal storage from being overwritten by accident. In the next release, MC's Eon revive feature will let you revive an Eon cluster from an existing communal storage location.

    Please change the communal storage location and try again. You just need to make sure the sub-path is unique for the new provisioning. For example, use s3://test-bucket/test1 if s3://test-bucket/test has already been used once (a quick way to check is sketched below).
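
    A quick way to confirm the candidate sub-path is unused before provisioning (hypothetical check, assumes boto3 and read access to the bucket; the bucket and prefix names are just examples):

    import boto3

    # Hypothetical check: list objects under the candidate communal-storage prefix.
    # An empty listing suggests the path has not been used by a previous cluster.
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket="test-bucket", Prefix="test1/", MaxKeys=1)
    if resp.get("KeyCount", 0) == 0:
        print("s3://test-bucket/test1 looks unused - OK for a new cluster")
    else:
        print("s3://test-bucket/test1 already contains objects - pick another sub-path")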

    Sorry for the inconvenience. Let me know if you have any further questions or problems. Thanks,
    Michael
