Understanding MaxConcurrency and Parallelism

Hello everyone. Suppose we have a cluster with 5 nodes of 128 cores each, and 3 resource pools. By assigning a MaxConcurrency of 40 to each pool, are we containing the number of cores each one uses? What role does parallelism play? Should it equal the number of cores? What would be the best practice to guarantee exclusive resources to each pool, or to keep certain pools from interfering with low-latency pools?

Best Answer

  • Vertica_Curtis Employee
    Answer ✓

    We're probably over-thinking this.

    ExecutionParallelism defaults to the # of cores. It is the maximum number of parallel threads that a query can use while executing. Setting it lower can restrict the number of parallel threads. The number of parallel threads required by a query is going to vary greatly from query to query and will be based on how many kinds of operations are required to run that query. In general, I don't expect you're going to find any value in adjusting the setting of ExecutionParallelism. That's a pretty edge-case setting. I would recommend just leaving it as the default.

    The number of nodes in this equation is also largely irrelevant. There's no "# of nodes" involved in any of this math. Anything that a node does, all other nodes do as well. A query gets distributed across the nodes, and each node processes against its own local data independently of other nodes, but when thinking about parallelism, or concurrency, or any setting in a resource pool, the # of nodes is irrelevant. It doesn't matter if you have 5 nodes, or 50. The values can be the same.

    MaxConcurrency is queries. That's it. It has nothing to do with cores, or nodes.

    If you want to prioritize a set of queries in a resource pool, I would simply set the RunTimePriority to MEDIUM for that pool, and the other pools to LOW. That will give them priority. Though, if too many of those requests are being run, LOW-priority pools could get starved. If that happens, you could set a MaxConcurrency on the busy pool to cap the # of requests so it doesn't starve the system of resources.

    Setting resource pool values is a bit of trial and error. Tweak them until you get the desired effect. Mostly this will involve toggling RunTimePriority, MaxConcurrency, and PlannedConcurrency. The last one acts as a budget calculator; it, too, defaults to the number of cores. But if you start to see requests get canceled or "out of memory" errors, double the PlannedConcurrency value. That will halve the memory budget per query.
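
    A minimal sketch of the tuning described above, assuming hypothetical pool names (web_portal, etl) and an illustrative 8G pool size, none of which come from the thread:

    ```sql
    -- Prioritize the low-latency pool and demote the batch pool
    -- (pool names are hypothetical examples):
    ALTER RESOURCE POOL web_portal RUNTIMEPRIORITY MEDIUM;
    ALTER RESOURCE POOL etl RUNTIMEPRIORITY LOW;

    -- Cap the busy pool so LOW-priority pools are not starved:
    ALTER RESOURCE POOL web_portal MAXCONCURRENCY 40;

    -- PLANNEDCONCURRENCY acts as the budget divisor:
    --   per-query budget ~= MEMORYSIZE / PLANNEDCONCURRENCY
    -- Doubling it (e.g. 10 -> 20 on an 8G pool) halves the initial
    -- budget from roughly 800M to roughly 400M per query:
    ALTER RESOURCE POOL etl MEMORYSIZE '8G' PLANNEDCONCURRENCY 20;
    ```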

Answers

  • Wow, 128 cores is a lot!

    The MaxConcurrency value caps the number of queries in that pool. It has nothing to do with cores.

    Resource pools provide a way of prioritizing tasks by type, or restricting or allocating specific sets of resources (usually memory) to various pools.

    For example, you might create three resource pools - ETL, dashboard, and adhoc. Queries of each of those types are then run in each of those resource pools (by assigning those users to those pools). If dashboard queries are the priority, you might decide to restrict the number of adhoc requests executed in the system to some number which prevents too many large, adhoc requests from taking up too many resources. You could also restrict the number of dashboard queries to another value, to prevent too many people from running too many dashboard requests simultaneously, which might flood the system. The appropriate values for MaxConcurrency are going to depend entirely on what your goals are and how many resources you have. It will also be fairly difficult to assign a set of values perfectly the first time. Resource pool tuning is usually a trial-and-error exercise where you may have to dial in these values over time.
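
    The three-pool setup above might be sketched like this; the sizes, caps, and user names are illustrative assumptions, not recommendations from the thread:

    ```sql
    -- Three pools with per-pool query caps (MAXCONCURRENCY counts
    -- queries, not cores):
    CREATE RESOURCE POOL etl       MEMORYSIZE '16G' MAXCONCURRENCY 10;
    CREATE RESOURCE POOL dashboard MEMORYSIZE '8G'  MAXCONCURRENCY 40;
    CREATE RESOURCE POOL adhoc     MEMORYSIZE '4G'  MAXCONCURRENCY 5;

    -- Route users to their pool so their queries run under its limits
    -- (user names are hypothetical):
    ALTER USER dashboard_app RESOURCE POOL dashboard;
    ALTER USER analyst1 RESOURCE POOL adhoc;
    ```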

  • Thank you very much Curtis, and parallelism is not associated with the number of cores? Basically we want to assign the shortest response time to select queries from a web portal without ETL tasks or users interfering.

  • marcothesane - Administrator

    The only resource pool parameter controlling the core count is EXECUTIONPARALLELISM. This one controls the maximum number of cores per node assigned to a single operator in a query plan. That is, for example, a JOIN, a GROUP BY, a SCAN, etc.

    On the other hand, MAXCONCURRENCY 4, for example, is there to cause a 5th query to be queued when 4 are already running in the same resource pool. PLANNEDCONCURRENCY 6 will be used to divide MEMORYSIZE by 6 to calculate the initial memory budget for a new query in that resource pool.

    That's part of how it works ...
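
    The queueing and budget behavior described above could be sketched as follows; the pool name and memory size are hypothetical examples:

    ```sql
    -- With MAXCONCURRENCY 4, a 5th concurrent query in this pool queues.
    -- With PLANNEDCONCURRENCY 6, each new query's initial memory budget
    -- is MEMORYSIZE / 6 (here 6G / 6, i.e. about 1G per query):
    CREATE RESOURCE POOL reporting
        MEMORYSIZE '6G'
        MAXCONCURRENCY 4
        PLANNEDCONCURRENCY 6;

    -- Inspect the effective settings in the system catalog:
    SELECT name, maxconcurrency, plannedconcurrency, memorysize
    FROM resource_pools
    WHERE name = 'reporting';
    ```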

  • Thank you very much, Marco. In theory, Vertica uses one core per execution thread, right? So if the sum of MaxConcurrency across all pools is 128 (the cores of one node) and all pools have work in progress, would all cores be busy? And for ExecutionParallelism, should the sum across all pools equal the number of cores per node, or the number of cores times the number of nodes (as a best practice)?
    Best Regards

  • Excellent explanation, thank you very much for your great support my friend.
