
Is there a way to run the Vertica CE docker image as a Pod in k8s?

Hi!

My company doesn't use Vertica, but we have a feature that lets users connect to it. So we have some automated tests that test the connection between our backend and a Vertica docker container. Right now we're just running it with docker-compose and it works out of the box with this snippet:

  test-vertica:
    image: 'vertica/vertica-ce:11.0.0-0'
    networks:
      - connector-network-test

Now we want to move these tests onto k8s: spin up a Deployment with a Service for Vertica before the tests run, spin up another Pod to act as the backend that runs the tests and hits the Vertica deployment, and tear everything down when we're done. This is supposed to be super lightweight, so I don't really want to use the Vertica Helm chart for this; if I can just get it working on k8s with the image that worked out of the box for docker-compose, that would be perfect. Running the Vertica deployment in privileged mode worked fine until the last bit of docker_entrypoint.sh:

...
Time: First fetch (1 row): 54.396 ms. All rows formatted: 54.425 ms
 Rows Loaded 
-------------
         200
(1 row)
Time: First fetch (1 row): 21.701 ms. All rows formatted: 21.742 ms
vsql:vmart_load_data.sql:37: 
server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
vsql:vmart_load_data.sql:37: connection to server was lost
Running ETL ...
vsql: could not connect to server: Connection refused
    Is the server running on host "???" and accepting
    TCP/IP connections on port 5433?
Confirm successful load
vsql: could not connect to server: Connection refused
    Is the server running on host "???" and accepting
    TCP/IP connections on port 5433?
Starting MC agent
Vertica is now running

This is the relevant part of my Vertica k8s deployment:

...
  Containers:
   test-vertica:
    Image:      vertica/vertica-ce:11.0.0-0
    Port:       5433/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     1
      memory:  512Mi
    Requests:
      cpu:     10m
      memory:  90Mi
    Environment:
      POD_HOST_IP:             (v1:status.hostIP)
      POD_IP:                  (v1:status.podIP)
      POD_NAME:                (v1:metadata.name)
      POD_NAMESPACE:           (v1:metadata.namespace)
      POD_NODE_NAME:           (v1:spec.nodeName)
      POD_SERVICE_ACCOUNT:     (v1:spec.serviceAccountName)
      POD_UID:                 (v1:metadata.uid)
    Mounts:                   <none>
  Volumes:                    <none>

I checked the config files in the Vertica container in both the k8s deployment and when it runs in docker-compose, and they look the same, so I'm not sure why it doesn't work when it's running as a Pod on k8s. Any help is appreciated! Thanks in advance.

Anna

Best Answer

  • Accepted Answer

Update: turns out this is a resource issue! Looks like the Vertica CE container needs 2 GB of memory to start because it has some hefty sample data sets.
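
    In practice that means bumping the memory limit in the deployment below from 512Mi to at least 2Gi. A minimal sketch of just the changed resources stanza (the 2Gi figure comes from this thread, not from an official sizing guide, and matching the request to the limit is my own assumption to keep the Pod from landing on a node that can't supply it):

    ```yaml
    # Only the resources stanza changes; the rest of the
    # deployment stays as posted in this thread.
    resources:
      limits:
        cpu: "1"
        memory: 2Gi      # was 512Mi; the CE image's sample data needs ~2 GB at startup
      requests:
        cpu: 10m
        memory: 2Gi      # assumption: request what the entrypoint actually needs
    ```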

Answers

  • Hi Anna,

    I would like to understand your use case in K8s a little better. Could you also provide the sample YAML file you are using to achieve this? The Vertica-K8s image we built to run in K8s works a little differently from the one-node Docker image. For example, in K8s we only support an Eon Mode cluster that uses local PVs for temp/catalog storage and an object store for data storage, whereas the one-node CE image uses Docker volumes/bind mounts for local temp/catalog and data storage. So, two options off the top of my head:

    1) Use the new Vertica operator, https://github.com/vertica/vertica-kubernetes (it is pretty simple and quick to set up), or
    2) Try the Vertica-K8s image that the operator uses and launch it the same way. For this you will have to make sure you have the right PVCs (or storage class) in your manifest and load the required data afterwards. Disclaimer for 2), though: it is not a supported config and is untested.

    Thanks,
    Naren

  • Hi Naren,

    Thanks for getting back to me! I really just want to run the Vertica docker image as a one node thing with no PVC, as basic as possible (so that it can be set up and torn down repeatedly and easily). The only requirements I have are 1. it initializes correctly and 2. it can respond to pings on port 5433. I guess if there's a way for this deployment to spin up the container just like how docker-compose would, that would be perfect for me.

    This is the yaml I'm using:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        module: test-vertica
        service: test-vertica
      name: test-vertica
      namespace: default
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 3
      selector:
        matchLabels:
          service: test-vertica
      strategy:
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 1
        type: RollingUpdate
      template:
        metadata:
          labels:
            module: test-vertica
            service: test-vertica
        spec:
          containers:
          - env:
            - name: POD_HOST_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP
            - name: POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: POD_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: POD_SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.serviceAccountName
            - name: POD_UID
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.uid
            image: vertica/vertica-ce:11.0.0-0
            imagePullPolicy: IfNotPresent
            name: test-vertica
            ports:
            - containerPort: 5433
              name: http
              protocol: TCP
            resources:
              limits:
                cpu: "1"
                memory: 512Mi
              requests:
                cpu: 10m
                memory: 90Mi
            securityContext:
              allowPrivilegeEscalation: true
              privileged: true
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 5
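
    For my two requirements above (initializes correctly, responds on 5433), one thing that could be bolted onto this container spec is a readinessProbe so the test Pod only starts hitting Vertica once the port actually accepts connections. A hypothetical addition (the timing numbers are guesses on my part, since the CE entrypoint loads sample data and can take several minutes on first start):

    ```yaml
    # Hypothetical addition to the test-vertica container spec above:
    # gate readiness on the Vertica client port accepting TCP connections.
    readinessProbe:
      tcpSocket:
        port: 5433
      initialDelaySeconds: 60   # entrypoint loads the VMart sample data first
      periodSeconds: 10
      failureThreshold: 60      # allow up to ~10 min before giving up
    ```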
    
    
  • SruthiA (Employee)

    @annayan1 If you are looking for a single node, then I think single node vertica docker community edition can help you. Could you please take a look at the below link

    https://hub.docker.com/r/vertica/vertica-ce

  • Hi @SruthiA, thanks, but I'm already using that image (as you can see in the k8s deployment YAML I pasted). And even though that image works in docker-compose out of the box, in k8s it gives me the error I pasted in my original post.

  • @annayan1 Vertica is meant to run as a stateful Pod when you deploy it in K8s, since it is a database and not a stateless application. docker-compose uses Docker volumes underneath, typically /var/lib/docker/volumes//_data . So I believe you will need to set up a PVC/PV, or assign a storage class for the deployment to use. You can do this by defining volumes in your deployment spec and then referencing them with volumeMounts inside the container. This is where the Vertica Pod will attempt to persist the data/catalog. You can use the local-path/local-volume provisioner as a simple way of achieving this.
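
    If a real PV feels like overkill for throwaway test runs, an emptyDir volume may be enough to give the entrypoint a writable data/catalog location. A sketch of what that could look like in the deployment spec (both the /data mountPath and the sizeLimit are assumptions to verify against the vertica-ce image docs, not confirmed values):

    ```yaml
    # Sketch only: an ephemeral scratch volume instead of a PVC.
    # The mountPath is an assumption about where this image keeps
    # its data/catalog; check the image docs before relying on it.
    spec:
      containers:
      - name: test-vertica
        # ...image, env, ports as in the deployment above...
        volumeMounts:
        - name: vertica-data
          mountPath: /data
      volumes:
      - name: vertica-data
        emptyDir:
          sizeLimit: 4Gi
    ```

    The trade-off: emptyDir dies with the Pod, which is exactly what you want for set-up/tear-down test cycles, but it still has to be big enough for the sample data load.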

  • SruthiA (Employee)

    @annayan1 : I used the yaml file you provided and was able to create the deployment on k8s. I can log in to the VMart database successfully.

    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Normal Scheduled 22m default-scheduler Successfully assigned default/test-vertica-7647c565b4-4jj6b to sruthk82
    Normal Pulling 21m kubelet Pulling image "vertica/vertica-ce:11.0.0-0"
    Normal Pulled 10m kubelet Successfully pulled image "vertica/vertica-ce:11.0.0-0" in 11m28.06215534s
    Normal Created 10m kubelet Created container test-vertica
    Normal Started 10m kubelet Started container test-vertica
    [[email protected] ~]$

    kubectl get po |grep test
    test-vertica-7647c565b4-4jj6b 1/1 Running 0 23m
    [[email protected] ~]$

    [[email protected] ~]$ admintools -t list_allnodes
    Node | Host | State | Version | DB
    ------------------+-----------+-------+------------------+-------
    v_vmart_node0001 | 127.0.0.1 | UP | vertica-11.0.0.0 | VMart

    [[email protected] ~]$
    Welcome to vsql, the Vertica Analytic Database interactive terminal.

    Type: \h or \? for help with vsql commands
    \g or terminate with semicolon to execute query
    \q to quit

    [email protected]()=> select version();

    version

    Vertica Analytic Database v11.0.0-0
    (1 row)

  • wow @SruthiA , thanks for trying that. But I'm still getting the same error - both at the end of the docker_entrypoint.sh run and also when I try to run vsql inside the container, I get:

    vsql: could not connect to server: Cannot assign requested address
        Is the server running on host "???" and accepting
        TCP/IP connections on port 5433?
    

    Did you just copy-paste my yaml into a file and run kubectl create -f? Which k8s cluster are you trying this on? I've tried both AKS and EKS and neither works.

  • SruthiA (Employee)

    @annayan1 : I used the same yaml file you provided. Please find it below. I used kubectl create -f forumdocker.yaml.

    I created a k8s cluster locally using open-source Kubernetes:

    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/

    [[email protected] ~]$ cat forumdocker.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        module: test-vertica
        service: test-vertica
      name: test-vertica
      namespace: default
    spec:
      progressDeadlineSeconds: 600
      replicas: 1
      revisionHistoryLimit: 3
      selector:
        matchLabels:
          service: test-vertica
      strategy:
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 1
        type: RollingUpdate
      template:
        metadata:
          labels:
            module: test-vertica
            service: test-vertica
        spec:
          containers:
          - env:
            - name: POD_HOST_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP
            - name: POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: POD_NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: POD_SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.serviceAccountName
            - name: POD_UID
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.uid
            image: vertica/vertica-ce:11.0.0-0
            imagePullPolicy: IfNotPresent
            name: test-vertica
            ports:
            - containerPort: 5433
              name: http
              protocol: TCP
            resources:
              limits:
                cpu: "1"
                memory: 512Mi
              requests:
                cpu: 10m
                memory: 90Mi
            securityContext:
              allowPrivilegeEscalation: true
              privileged: true
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 5

  • @SruthiA : interesting, when I try this locally I get the same errors. There must be some default setting that differs between our setups. But what could it be...

  • SruthiA (Employee)

    @annayan1 : As @nprabhu2195 mentioned, you might be missing some settings or configuration. Please open a support case, as it might require a WebEx session.
