From David's EON webcast yesterday, I have a question about slide 68 (snippet attached).
This slide on Elastic Throughput Scaling contains the statement:
"Each node can run 3 concurrent queries"
Can you please explain this in detail?
This is a hypothetical example to illustrate how Elastic Throughput Scaling (ETS) works. Each node's physical resources support running a certain number of concurrent queries (usually more than 3!). Concurrency is tunable through Resource Pools (think PLANNEDCONCURRENCY / MAXCONCURRENCY). For the purposes of this example, we simplified resource allocation to "slots" - each query takes one "slot."
In order for a query to run, it must acquire resources on every node participating in the query. For a 3-shard database, this usually means 3 nodes must participate, so the aggregate cost of a query across the cluster is 3 slots. With a 3-node, 3-slot-per-node deployment, we have 9 total slots and can therefore run 3 queries in parallel. With a 4-node database we'd have 12 slots in aggregate, so at 3 slots per query, 4 concurrent queries are possible. ETS achieves this by randomly load balancing queries across the nodes.
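The slot arithmetic above can be sketched in a few lines. This is just an illustration of the simplified model from the slide, not real Vertica behavior; the function name and parameters (`slots_per_node`, `shards`) are made up for this example.

```python
def max_concurrent_queries(nodes: int, slots_per_node: int, shards: int) -> int:
    """Simplified ETS "slot" model: each query needs one slot on each of
    `shards` participating nodes, so its cluster-wide cost is `shards` slots."""
    total_slots = nodes * slots_per_node
    return total_slots // shards

# 3 nodes x 3 slots = 9 slots; 3 slots per query -> 3 concurrent queries
print(max_concurrent_queries(nodes=3, slots_per_node=3, shards=3))  # 3

# Add one node: 12 slots -> 4 concurrent queries
print(max_concurrent_queries(nodes=4, slots_per_node=3, shards=3))  # 4
```

Note the integer division: a 5th node would give 15 slots, enough for 5 full queries, but leftover slots that can't cover all 3 shards of a query go unused in this idealized model.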
In practice, the situation is messier - queries consume actual resources like memory rather than abstract "slots", load balancing isn't perfect, and so on. But hopefully this illustrates how adding 1 node to a 3-node database can yield a roughly 33% improvement in query throughput (4 concurrent queries instead of 3).