
HELP HELP!!! Database with 1 or 2 nodes cannot be k-safe and it may lose data if it crashes — Vertica Forum

shang Community Edition User
edited June 2020 in General Discussion

I'm installing Vertica 10.0 on CentOS 6.9.

I would appreciate it if anyone can help resolve this!


Answers

  • Sankarmn Community Edition User ✭✭
  • Nimmi_gupta - Select Field - Employee

    @shang would it be possible to share the admintools.conf from /opt/vertica/conf and the admintools.log from /opt/vertica/log?
    Once we have both logs, we can check and provide more information about the create_db failure.

  • shang Community Edition User

    @Sankarmn thanks for your reply.
    However, I tried installing a three-node cluster and hit the same problem.
    The install log is below.
    47.114.142.96, 47.114.137.249, and 47.114.48.164 are the addresses of my three Aliyun ECS servers.

    [root@iZbp1hduhk14uh847bf7ncZ ~]# rpm -Uvh vertica-10.0.0-0.x86_64.RHEL6.rpm
    Preparing... ########################################### [100%]
    1:vertica ########################################### [100%]

    Vertica Analytic Database v10.0.0-0 successfully installed on host iZbp1hduhk14uh847bf7ncZ

    To complete your NEW installation and configure the cluster, run:
    /opt/vertica/sbin/install_vertica

    To complete your Vertica UPGRADE, run:
    /opt/vertica/sbin/update_vertica


    Important

    Before upgrading Vertica, you must backup your database. After you restart your
    database after upgrading, you cannot revert to a previous Vertica software version.

    View the latest Vertica documentation at https://www.vertica.com/documentation/vertica/

    [root@iZbp1hduhk14uh847bf7ncZ ~]# /opt/vertica/sbin/install_vertica --hosts 47.114.142.96,47.114.137.249,47.114.48.164 --rpm vertica-10.0.0-0.x86_64.RHEL6.rpm
    Vertica Analytic Database 10.0.0-0 Installation Tool

    Validating options...

    Mapping hostnames in --hosts (-s) to addresses...
    Error: A cluster exists but does not match the provided --hosts
    47.114.48.164 in --hosts but not in cluster.
    47.114.137.249 in --hosts but not in cluster.
    47.114.142.96 in --hosts but not in cluster.
    127.0.0.1 in cluster but not in --hosts.
    Hint: omit --hosts for existing clusters. To change a cluster use --add-hosts or --remove-hosts.
    Installation FAILED with errors.

    Installation stopped before any changes were made.
    [root@iZbp1hduhk14uh847bf7ncZ ~]# cat /opt/vertica/config/admintools.conf
    [Configuration]
    format = 3
    install_opts = --hosts '127.0.0.1' --clean
    default_base = /home/dbadmin
    controlmode = broadcast
    controlsubnet = default
    spreadlog = False
    last_port = 5433
    tmp_dir = /tmp
    ipv6 = False
    atdebug = False
    atgui_default_license = False
    unreachable_host_caching = True
    aws_metadata_conn_timeout = 2
    rebalance_shards_timeout = 36000
    database_state_change_poll_timeout = 21600
    wait_for_shutdown_timeout = 3600
    pexpect_verbose_logging = False
    sync_catalog_retries = 2000
    admintools_config_version = 109

    [Cluster]
    hosts = 127.0.0.1

    [Nodes]
    node0001 = 127.0.0.1,/home/dbadmin,/home/dbadmin
    v_test1_node0001 = 127.0.0.1,/home/dbadmin,/home/dbadmin

    [SSHConfig]
    ssh_user =
    ssh_ident =
    ssh_options = -oConnectTimeout=30 -o TCPKeepAlive=no -o ServerAliveInterval=15 -o ServerAliveCountMax=2 -o StrictHostKeyChecking=no -o BatchMode=yes

    [BootstrapParameters]
    awsendpoint = null
    awsregion = null

    [Database:test1]
    restartpolicy = ksafe
    port = 5433
    path = /home/dbadmin/test1
    nodes = v_test1_node0001
    is_eon_mode = False
    depot_base_dir = None
    depot_size = None
    communal_storage_url = None
    num_shards = None
    is_first_start_after_revive = False
    branch_name =

    thanks again!


  • shang Community Edition User

    @Nimmi_gupta said:
    @shang would it be possible to share the admintools.conf from /opt/vertica/conf and the admintools.log from /opt/vertica/log?
    Once we have both logs, we can check and provide more information about the create_db failure.

    @Nimmi_gupta
    Thanks for your reply.
    Below are my admintools.log and the Vertica install log.

    [dbadmin@ncZ ~]$ admintools -t create_db -x auth_params.conf --communal-storage-location=oss://xianan-bucket --depot-path=/home/dbadmin/depot/ --shard-count=4 -s 47.114.xxx.xxx,47.114.xxx.xxx -d DBTest2
    Info: no password specified, using none
    Default depot size in use
    Database with 1 or 2 nodes cannot be k-safe and it may lose data if it crashes

    Error: These nodes/hosts do not appear to be part of this cluster:
    47.114.xxx.xxx, 47.114.xxx.xxx
    Hint: Valid inputs are IP addresses, node names, or hostnames.
    Hostnames must resolve to a single address within the target family.
    Hostnames and addresses must be in the IPv4 address family.
    [dbadmin@ncZ ~]$ cat /opt/vertica/config/admintools.conf
    [Configuration]
    format = 3
    install_opts = --hosts '127.0.0.1' --clean
    default_base = /home/dbadmin
    controlmode = broadcast
    controlsubnet = default
    spreadlog = False
    last_port = 5433
    tmp_dir = /tmp
    ipv6 = False
    atdebug = False
    atgui_default_license = False
    unreachable_host_caching = True
    aws_metadata_conn_timeout = 2
    rebalance_shards_timeout = 36000
    database_state_change_poll_timeout = 21600
    wait_for_shutdown_timeout = 3600
    pexpect_verbose_logging = False
    sync_catalog_retries = 2000
    admintools_config_version = 109

    [Cluster]
    hosts = 127.0.0.1

    [Nodes]
    node0001 = 127.0.0.1,/home/dbadmin,/home/dbadmin
    v_test1_node0001 = 127.0.0.1,/home/dbadmin,/home/dbadmin

    [SSHConfig]
    ssh_user =
    ssh_ident =
    ssh_options = -oConnectTimeout=30 -o TCPKeepAlive=no -o ServerAliveInterval=15 -o ServerAliveCountMax=2 -o StrictHostKeyChecking=no -o BatchMode=yes

    [BootstrapParameters]
    awsendpoint = null
    awsregion = null

    [Database:test1]
    restartpolicy = ksafe
    port = 5433
    path = /home/dbadmin/test1
    nodes = v_test1_node0001
    is_eon_mode = False
    depot_base_dir = None
    depot_size = None
    communal_storage_url = None
    num_shards = None
    is_first_start_after_revive = False
    branch_name =

    thanks again for your reply~

  • Bryan_H Vertica Employee Administrator

    It looks like you have already created a single-node cluster with the "test1" database.
    I think this is the quickest solution:
    add the new hosts with the -A / --add-hosts option (replace -s with -A), then create the new database with admintools -t create_db.
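
    The flow Bryan describes would look roughly like this (a sketch using the RPM name and addresses posted earlier in this thread; the database name "mydb" is a placeholder, and the commands are untested against this cluster):

    ```shell
    # 1. Add the three external hosts to the existing cluster (run as root).
    /opt/vertica/sbin/install_vertica \
        --add-hosts 47.114.142.96,47.114.137.249,47.114.48.164 \
        --rpm vertica-10.0.0-0.x86_64.RHEL6.rpm

    # 2. As dbadmin, create a database spanning all three nodes.
    #    "mydb" is a hypothetical database name.
    admintools -t create_db -d mydb \
        -s 47.114.142.96,47.114.137.249,47.114.48.164
    ```
    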

  • Bryan_H Vertica Employee Administrator

    Alternately, you may need to remove the test1 database, so you can remove the 127.0.0.1 node definition attached to test1, by running "admintools -t drop_db".
    Then you can reset the nodes with install_vertica -s ... --clean, where --clean removes the old node list (including 127.0.0.1) and replaces it with the new list from the -s option.
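
    Put together, this alternative would be roughly the following (a sketch assembled from commands already shown in this thread, not verified against your cluster):

    ```shell
    # 1. Drop the old single-node database (run as dbadmin).
    admintools -t drop_db -d test1

    # 2. Rebuild the node list (run as root). --clean discards the old
    #    node list (including 127.0.0.1) and replaces it with the hosts
    #    passed to -s.
    /opt/vertica/sbin/install_vertica \
        -s 47.114.142.96,47.114.137.249,47.114.48.164 \
        --clean \
        --rpm vertica-10.0.0-0.x86_64.RHEL6.rpm
    ```
    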

  • shang Community Edition User

    @Bryan_H said:
    It looks like you have already created a single-node cluster with "test1" database.
    I think this is the quickest solution:
    Add new hosts with -A / --add-hosts option (replace -s with -A) then create the new database with admintools -t create_db

    thanks for your reply.
    I have now dropped the 'test1' database. However, another error remains unsolved; the log is below.
    ==================console log============================
    [root@iZ~]# /opt/vertica/sbin/install_vertica -A 47.114.xxx.xxx
    Vertica Analytic Database 10.0.0-0 Installation Tool

    Validating options...

    Mapping hostnames in --add-hosts (-A) to addresses...

    Error: Existing single-node localhost (loopback) cluster cannot be expanded
    Hint: Move cluster to external address first. See online documentation.
    Installation FAILED with errors.

    Installation stopped before any changes were made.

    ========admintools.conf=============
    [root@iZb ~]# cat /opt/vertica/config/admintools.conf
    [SSHConfig]
    ssh_user =
    ssh_ident =
    ssh_options = -oConnectTimeout=30 -o TCPKeepAlive=no -o ServerAliveInterval=15 -o ServerAliveCountMax=2 -o StrictHostKeyChecking=no -o BatchMode=yes

    [BootstrapParameters]
    awsendpoint = null
    awsregion = null

    [Configuration]
    format = 3
    install_opts = -s '127.0.0.1' --clean
    default_base = /home/dbadmin
    controlmode = broadcast
    controlsubnet = default
    spreadlog = False
    last_port = 5433
    tmp_dir = /tmp
    ipv6 = False
    atdebug = False
    atgui_default_license = False
    unreachable_host_caching = True
    aws_metadata_conn_timeout = 2
    rebalance_shards_timeout = 36000
    database_state_change_poll_timeout = 21600
    wait_for_shutdown_timeout = 3600
    pexpect_verbose_logging = False
    sync_catalog_retries = 2000
    admintools_config_version = 109

    [Cluster]
    hosts = 127.0.0.1

    [Nodes]
    node0001 = 127.0.0.1,/home/dbadmin,/home/dbadmin

    thanks for your help again~

  • shang Community Edition User

    @Bryan_H said:
    Alternately, you may need to remove the test1 database so you can remove the 127.0.0.1 node definition attached to test1 by running "admintools -t drop_db"
    Then you can reset the nodes with install_vertica -s --clean where --clean will remove the old node list including 127.0.0.1 and replace with new list from -s option.

    Hi @Bryan_H,
    thanks for your reply.
    I have already deleted the database, but I don't understand what you mean by "install_vertica -s --clean, where --clean will remove the old node list including 127.0.0.1 and replace it with the new list from the -s option", because I got another error when I did what you said. The console output and log are below.

    [root@iZbp ~]# /opt/vertica/sbin/install_vertica --clean
    Vertica Analytic Database 10.0.0-0 Installation Tool

    Validating options...

    Error: No machines will be included in the cluster!
    Hint: provide --hosts.
    Installation FAILED with errors.

    Installation stopped before any changes were made.
    [root@iZbZ ~]# /opt/vertica/sbin/install_vertica -s 47.114.xxx.xxx,47.114.xxx.xxx,47.114.xxx.xxx --rpm vertica-10.0.0-0.x86_64.RHEL6.rpm
    Vertica Analytic Database 10.0.0-0 Installation Tool

    Validating options...

    Mapping hostnames in --hosts (-s) to addresses...
    Error: A cluster exists but does not match the provided --hosts
    47.114.xxx.xxx in --hosts but not in cluster.
    47.114.xxx.xxx in --hosts but not in cluster.
    47.114.xxxx.xxx in --hosts but not in cluster.
    127.0.0.1 in cluster but not in --hosts.
    Hint: omit --hosts for existing clusters. To change a cluster use --add-hosts or --remove-hosts.
    Installation FAILED with errors.

    Installation stopped before any changes were made.

    Thanks for your support again and let me know if you need additional information.

  • LenoyJ - Select Field - Employee
    edited June 2020

    You need to specify the -s parameter if you are using the --clean parameter.
    If this is a cloud/virtualized environment, try this:

    sudo /opt/vertica/sbin/install_vertica -s 47.114.xxx.xxx,47.114.xxx.xxx,47.114.xxx.xxx --clean --rpm vertica-10.0.0-0.x86_64.RHEL6.rpm --point-to-point
    

    If this is not a cloud/virtualized environment, remove the --point-to-point flag.

  • shang Community Edition User

    @LenoyJ said:
    You need to specify the -s parameter if you are using the --clean parameter.
    If this is a cloud/virtualized environment, try this:

    sudo /opt/vertica/sbin/install_vertica -s 47.114.xxx.xxx,47.114.xxx.xxx,47.114.xxx.xxx --clean --rpm vertica-10.0.0-0.x86_64.RHEL6.rpm --point-to-point

    If this is not a cloud/virtualized environment, remove the --point-to-point flag.

    Thanks for your reply. However, I got this error too, as below.
    (Note: all of these operations are on the cloud servers.)
    [root@iZcZ ~]# /opt/vertica/sbin/install_vertica -s '47.114.xxx.xxx,47.114.xxx.xxx,47.114.xxx.xxx' --clean --rpm vertica-10.0.0-0.x86_64.RHEL6.rpm --point-to-point
    Vertica Analytic Database 10.0.0-0 Installation Tool

    Validating options...

    Mapping hostnames in --hosts (-s) to addresses...
    Error: cannot find which cluster host is the local host.
    Hint: Is this node in the cluster? Did its IP address change?
    Installation FAILED with errors.

    Installation stopped before any changes were made.

    [root@iZbZ ~]# cat /opt/vertica/config/admintools.conf
    [SSHConfig]
    ssh_user =
    ssh_ident =
    ssh_options = -oConnectTimeout=30 -o TCPKeepAlive=no -o ServerAliveInterval=15 -o ServerAliveCountMax=2 -o StrictHostKeyChecking=no -o BatchMode=yes

    [BootstrapParameters]
    awsendpoint = null
    awsregion = null

    [Configuration]
    format = 3
    install_opts = -s '127.0.0.1' --clean
    default_base = /home/dbadmin
    controlmode = broadcast
    controlsubnet = default
    spreadlog = False
    last_port = 5433
    tmp_dir = /tmp
    ipv6 = False
    atdebug = False
    atgui_default_license = False
    unreachable_host_caching = True
    aws_metadata_conn_timeout = 2
    rebalance_shards_timeout = 36000
    database_state_change_poll_timeout = 21600
    wait_for_shutdown_timeout = 3600
    pexpect_verbose_logging = False
    sync_catalog_retries = 2000
    admintools_config_version = 109

    [Cluster]
    hosts = 127.0.0.1

    [Nodes]
    node0001 = 127.0.0.1,/home/dbadmin,/home/dbadmin

    appreciate it for your help~

  • LenoyJ - Select Field - Employee

    It looks like you created a cluster once before with localhost 127.0.0.1.
    Try renaming /opt/vertica/config/admintools.conf to something else. Do this on all 3 nodes:

    mv /opt/vertica/config/admintools.conf /opt/vertica/config/admintools.conf.bk
    

    Then, on any one node, try again:

    sudo /opt/vertica/sbin/install_vertica -s 47.114.xxx.xxx,47.114.xxx.xxx,47.114.xxx.xxx --clean --rpm vertica-10.0.0-0.x86_64.RHEL6.rpm --point-to-point
    
  • shang Community Edition User

    Hi~
    Thanks, everyone!
    I have now successfully installed a Vertica cluster with three hosts.
    However, I got another error when trying to create an Eon Mode database.
    Can Vertica Eon Mode support an object storage service other than AWS or Azure?

    [dbadmin@iZcZ ~]$ admintools -t create_db -x auth_params.conf -s '172.16.xxx.xxx,172.16.xxx.xxx,172.16.xxx.xxx' -d VMart3 -p 123 --depot-path=/home/dbadmin/depot --shard-count=6 --communal-storage-location=oss://xxxx-bucket -D /home/dbadmin/data/ -c /home/dbadmin/catalog/ --depot-size 5G
    Distributing changes to cluster.
    Creating database VMart3
    Bootstrap on host 172.16.xxx.66 return code -6 stdout '' stderr ''

    Error: Bootstrap on host 172.16.xxx.66 return code -6 stdout '' stderr ''

    Thanks for your help, friends!

  • shang Community Edition User

    @LenoyJ said:
    It looks like you created a cluster once before with localhost 127.0.0.1.
    Try renaming /opt/vertica/config/admintools.conf to something else. Do this on all 3 nodes:

    mv /opt/vertica/config/admintools.conf /opt/vertica/config/admintools.conf.bk

    Then, on any one node, try again:

    sudo /opt/vertica/sbin/install_vertica -s 47.114.xxx.xxx,47.114.xxx.xxx,47.114.xxx.xxx --clean --rpm vertica-10.0.0-0.x86_64.RHEL6.rpm --point-to-point

    Hi @LenoyJ~
    thanks for your reply. I have now solved the Vertica install problems.
    Unfortunately, I got a new error when I tried to create an Eon Mode database, as in my previous message. I would appreciate any suggestions about it.

    Thanks again,
    best wishes~

  • LenoyJ - Select Field - Employee
    edited June 2020

    @shang said:
    Can Vertica Eon Mode support an object storage service other than AWS or Azure?

    I doubt AliCloud's OSS (oss://) would work as a supported communal storage location for Eon yet (as of 10.0). Supported Eon Mode locations in the cloud today are AWS & GCP. On-prem Eon Mode deployments include Pure Storage, MinIO & HDFS, though you could theoretically deploy MinIO and HDFS in the cloud.

    Enterprise Mode, on the other hand, does not require communal storage and should work on any commodity/virtualized hardware that meets our minimum requirements. It looks like you are trying out Vertica and deploying a few nodes; maybe try Enterprise instead for now? Like so:

    admintools -t create_db -d VMart3 -p 123 -s 172.16.xxx.xxx,172.16.xxx.xxx,172.16.xxx.xxx -D /home/dbadmin/data/ -c /home/dbadmin/catalog/
    
  • shang Community Edition User

    @LenoyJ said:

    @shang said:
    Can Vertica Eon Mode support an object storage service other than AWS or Azure?

    I doubt AliCloud's OSS (oss://) would work as a supported communal storage location for Eon yet (as of 10.0). Supported Eon Mode locations in the cloud today are AWS & GCP. On-prem Eon Mode deployments include Pure Storage, MinIO & HDFS, though you could theoretically deploy MinIO and HDFS in the cloud.

    Enterprise Mode, on the other hand, does not require communal storage and should work on any commodity/virtualized hardware that meets our minimum requirements. It looks like you are trying out Vertica and deploying a few nodes; maybe try Enterprise instead for now? Like so:

    admintools -t create_db -d VMart3 -p 123 -s 172.16.xxx.xxx,172.16.xxx.xxx,172.16.xxx.xxx -D /home/dbadmin/data/ -c /home/dbadmin/catalog/

    Hi @LenoyJ,
    thanks for your reply.
    Unfortunately, my job is to test whether AliCloud's OSS can work as communal storage for Eon, so I can't switch to Enterprise Mode.
    Can you give me some advice about this problem, please?
    Thanks again.

  • Bryan_H Vertica Employee Administrator

    Does Alicloud OSS offer a connection using S3 protocol? If so, then you may be able to connect using equivalent s3:// URL.
    Please see Alicloud documentation for S3 compatibility information: https://www.alibabacloud.com/help/doc-detail/64919.html?spm=a2c5t.11065259.1996646101.searchclickresult.2c434c0d8ByRNe
    Unfortunately, we don't officially support other protocols.
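
    If OSS's S3-compatible endpoint does work, the auth_params.conf passed to create_db via -x would carry the S3-style settings Vertica documents for S3-compatible storage. This is a hypothetical sketch: the key, secret, and bucket are placeholders, and the endpoint host is an assumption — check your OSS region.

    ```ini
    ; Hypothetical auth_params.conf for an S3-compatible endpoint.
    ; Substitute your own credentials and region endpoint.
    awsauth = <access-key>:<secret-key>
    awsendpoint = oss-cn-hangzhou.aliyuncs.com
    awsenablehttps = 1
    ```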

  • shang Community Edition User

    @Bryan_H said:
    Does Alicloud OSS offer a connection using S3 protocol? If so, then you may be able to connect using equivalent s3:// URL.
    Please see Alicloud documentation for S3 compatibility information: https://www.alibabacloud.com/help/doc-detail/64919.html?spm=a2c5t.11065259.1996646101.searchclickresult.2c434c0d8ByRNe
    Unfortunately, we don't officially support other protocols.

    Thanks for your reply.
    However, I got an error when I switched to S3 as the communal storage to create an Eon Mode database.
    The log is below.

    I would appreciate any suggestions about this error.

    [dbadmin@iZbp1hduhk14uh847bf7ncZ ~]$ admintools -t create_db -x auth_params.conf -s '172.16.xxx.a,172.16.xxx.b,172.16.xxx.c' -d VMart1 -p 123 --depot-path=/home/dbadmin/depot --shard-count=6 --communal-storage-location=s3://vertica-test-1 -D /home/dbadmin/data/ -c /home/dbadmin/catalog/ --depot-size 5G
    Distributing changes to cluster.
    Creating database VMart1
    Starting bootstrap node v_vmart1_node0002 (172.16.xxx.b)
    Starting nodes:
    v_vmart1_node0002 (172.16.xxx.b)
    Starting Vertica on all nodes. Please wait, databases with a large catalog may take a while to initialize.
    Node Status: v_vmart1_node0002: (DOWN)
    Node Status: v_vmart1_node0002: (DOWN)
    Node Status: v_vmart1_node0002: (DOWN)
    Node Status: v_vmart1_node0002: (DOWN)
    Node Status: v_vmart1_node0002: (DOWN)
    Node Status: v_vmart1_node0002: (UP)
    Creating database nodes
    Creating node v_vmart1_node0001 (host 172.16.xxx.a)
    Creating node v_vmart1_node0003 (host 172.16.xxx.c)
    Generating new configuration information
    Stopping single node db before adding additional nodes.
    Database shutdown complete
    Starting all nodes
    Start hosts = ['172.16.xxx.xx', '172.16.xxx.xx', '172.16.xxx.xx']
    Starting nodes:
    v_vmart1_node0002 (172.16.xxx.xx)
    v_vmart1_node0001 (172.16.xxx.xx)
    v_vmart1_node0003 (172.16.xxx.xx)
    Starting Vertica on all nodes. Please wait, databases with a large catalog may take a while to initialize.
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    It is suggested that you continue waiting.
    Do you want to continue waiting? (yes/no) [yes]
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    It is suggested that you continue waiting.
    Do you want to continue waiting? (yes/no) [yes]
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    It is suggested that you continue waiting.
    Do you want to continue waiting? (yes/no) [yes]
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    It is suggested that you continue waiting.
    Do you want to continue waiting? (yes/no) [yes]
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    Node Status: v_vmart1_node0001: (DOWN) v_vmart1_node0002: (DOWN) v_vmart1_node0003: (DOWN)
    ERROR: Not all nodes came up, but not all down. Run scrutinize.
    Unable to establish vsql script connection: Unable to connect to 'VMart1'
    Unable to establish client-server connection: Unable to connect to 'VMart1'
    Unable to create depot storage locations (if Eon) without a client-server connection.
    Unable to rebalance shards (if Eon) without a client-server connection.
    Unable to set K-safety value without a client-server connection.
    Unable to install default extension packages without a vsql script connection
    Unable to sync database catalog (if Eon) without a client-server connection.
    Database creation SQL tasks included one or more failures (see above).
    Database VMart1 created successfully, some nodes may have had startup problems.

    thanks again for your attention~
