Scalability question: more RAM on existing nodes, or additional nodes?

Right now we have 5 nodes with 144GB RAM each (expandable to 384GB per node). Upgrading all 5 nodes to 192GB RAM and adding 2 more nodes is cheaper than taking the existing 5 nodes to 384GB. Which configuration would scale better for Vertica?


  • I would suggest the two additional nodes. Adding nodes scales not only processing but also memory and disk, and it distributes the workload across more machines.
    Although more memory per node seems like an enticing proposition, Vertica's memory recommendations are based on the number of CPU cores available per node. The number of CPUs per node determines how much concurrency you can support, so even with additional memory, the concurrency limit per node stays the same; each concurrent query simply has more memory at its disposal. Vertica recommends a minimum of 4GB per core, though typical configurations have 6-8GB per core. So if the existing nodes already meet the memory recommendation, scaling the cluster by adding nodes is the better option.

  • Thanks for the feedback, Pravesh. We're seeing more and more memory issues, mostly merge commands failing due to lack of memory in the resource pool. Our systems have 12 cores with HT (24 logical cores) and 144GB, which is 6GB per core; pushing to 192GB would give us 8GB per core. I'll do some testing to see whether an additional server or two helps.
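Putting the numbers from the thread into a quick sketch may help compare the options. This assumes the figures mentioned above (24 logical cores per node, the two upgrade paths discussed); adjust for your actual hardware:

```python
# Compare the upgrade paths from the thread (figures are assumptions
# taken from the discussion above, not measurements).
CORES_PER_NODE = 24  # 12 physical cores with hyper-threading

def per_core_gb(ram_gb, cores=CORES_PER_NODE):
    """Memory available per logical core on a single node."""
    return ram_gb / cores

configs = {
    "current  (5 x 144GB)": {"nodes": 5, "ram_gb": 144},
    "more RAM (5 x 384GB)": {"nodes": 5, "ram_gb": 384},
    "more nodes (7 x 192GB)": {"nodes": 7, "ram_gb": 192},
}

for name, c in configs.items():
    total_ram = c["nodes"] * c["ram_gb"]
    total_cores = c["nodes"] * CORES_PER_NODE
    print(f"{name}: {per_core_gb(c['ram_gb']):.1f} GB/core, "
          f"{total_cores} cores, {total_ram} GB total")
```

The 7-node option adds 48 cores of concurrency while still landing inside the 6-8GB-per-core range, whereas the 384GB option adds memory per query but no concurrency.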
