Slow copy performance

wooby01 Community Edition User

Hi all,

I'm seeing slow performance while loading data from an Oracle database into Vertica through Talend. The SELECT on the Oracle side completes fairly quickly (around 5 minutes), but inserting the selected data into Vertica is taking a very long time.
I have 3 jobs running in Talend, and they have been running since 5 PM yesterday. The first job shows 155 rows/s, the second 553 rows/s, and the third 257 rows/s.
I checked server resource usage, with the results below.
Memory: 64 GB total, 5 GB used
CPU: 80% used
Storage: 15 GB used, plenty of free space left
Network: 12 Mbps on the receive side. The server has a 10G connection, but the Talend host only has a 1G connection.

I have a cluster with 1 node and 1 database, and the 3 jobs are inserting into 3 separate tables.
I think something is preventing the COPY command from using the full resources. Have you ever faced a similar situation?
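If it helps, I can also check the active loads on the Vertica side with something like this (based on the LOAD_STREAMS system table; exact column names may vary by version):
select table_name, load_start, accepted_row_count, rejected_row_count, read_bytes, parse_complete_percent
from v_monitor.load_streams
where is_executing;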

Thanks everyone.

Answers

  • Bryan_H Vertica Employee Administrator

    Can you share the following:
    select * from resource_pools; -- shows current resource pool settings
    select * from resource_pool_status; -- shows current resource pool usage
    My guess is that the status table will show the pool is not being fully utilized because it's subject to a query budget. We can adjust the settings, or move the jobs into a pool with more resources and a correct query budget.
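    If it's easier to share as text, a trimmed query like this (just the budget-related columns, using the names as they appear in resource_pool_status) is enough:
    select pool_name, memory_size_kb, planned_concurrency, query_budget_kb, running_query_count
    from resource_pool_status
    order by pool_name;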

  • wooby01 Community Edition User

    Hi Bryan,

    Thank you. Yes, it looks like the resource pool is not being fully utilized because of the settings. Apologies for pasting the results here as plain text; company security policy doesn't allow me to upload files of any kind.
    dbadmin=> select * from resource_pools;
    pool_id | name | subcluster_oid | subcluster_name | is_internal | memorysize | maxmemorysize | maxquerymemorysize | executionparallelism | priority | runtimepriority | runtimeprioritythreshold | queuetimeout | plannedconcurrency | maxconcurrency | runtimecap | singleinitiator | cpuaffinityset | cpuaffinitymode | cascadeto | cascadetosubclusterpool
    -------------------+----------+----------------+-----------------+-------------+------------+---------------+--------------------+----------------------+----------+-----------------+--------------------------+--------------+--------------------+----------------+------------+-----------------+----------------+-----------------+-----------+-------------------------
    45035996273704998 | general | 0 | | t | | Special: 95% | | AUTO | 0 | MEDIUM | 2 | 00:05 | AUTO | | | f | | ANY | |
    45035996273705000 | sysquery | 0 | | t | 1G | | | AUTO | 110 | HIGH | 0 | 00:05 | AUTO | | | f | | ANY | |
    45035996273705002 | tm | 0 | | t | 4G | | | AUTO | 105 | MEDIUM | 60 | 00:05 | AUTO | 7 | | t | | ANY | |
    45035996273705004 | refresh | 0 | | t | 0% | | | AUTO | -10 | MEDIUM | 60 | 00:05 | 4 | | | t | | ANY | |
    45035996273705006 | recovery | 0 | | t | 0% | | | AUTO | 107 | MEDIUM | 60 | 00:05 | AUTO | 5 | | t | | ANY | |
    45035996273705008 | dbd | 0 | | t | 0% | | | AUTO | 0 | MEDIUM | 0 | 0 | AUTO | | | t | | ANY | |
    45035996273705086 | jvm | 0 | | t | 0% | 2G | | AUTO | 0 | MEDIUM | 2 | 00:05 | AUTO | | | f | | ANY | |
    45035996273705096 | blobdata | 0 | | t | 0% | 10% | | | | | | | AUTO | | | | | | |
    45035996273705098 | metadata | 0 | | t | 0% | | | | | | | | | | | | | | |
    (9 rows)

  • wooby01 Community Edition User

    dbadmin=> select * from resource_pool_status;
    node_name | pool_oid | pool_name | is_internal | memory_size_kb | memory_size_actual_kb | memory_inuse_kb | general_memory_borrowed_kb | queueing_threshold_kb | max_memory_size_kb | max_query_memory_size_kb | running_query_count | planned_concurrency | max_concurrency | is_standalone | queue_timeout | queue_timeout_in_seconds | execution_parallelism | priority | runtime_priority | runtime_priority_threshold | runtimecap_in_seconds | single_initiator | query_budget_kb | cpu_affinity_set | cpu_affinity_mask | cpu_affinity_mode
    ----------------+-------------------+-----------+-------------+----------------+-----------------------+-----------------+----------------------------+-----------------------+--------------------+--------------------------+---------------------+---------------------+-----------------+---------------+---------------+--------------------------+-----------------------+----------+------------------+----------------------------+-----------------------+------------------+-----------------+------------------+-------------------+-------------------
    v_vdb_node0001 | 45035996273704998 | general | t | 55098490 | 55098490 | 0 | 0 | 52343564 | 55098490 | | 0 | 8 | | t | 00:05 | 300 | AUTO | 0 | MEDIUM | 2 | | false | 6542945 | | ff | ANY
    v_vdb_node0001 | 45035996273705000 | sysquery | t | 1048576 | 1048576 | 40207 | 0 | 53410196 | 56221260 | | 1 | 8 | | f | 00:05 | 300 | AUTO | 110 | HIGH | 0 | | false | 131072 | | ff | ANY
    v_vdb_node0001 | 45035996273705002 | tm | t | 4194304 | 4194304 | 0 | 0 | 56398636 | 59366988 | | 0 | 7 | 7 | f | 00:05 | 300 | AUTO | 105 | MEDIUM | 60 | | true | 599186 | | ff | ANY
    v_vdb_node0001 | 45035996273705004 | refresh | t | 0 | 0 | 0 | 0 | 52343564 | 55098490 | | 0 | 4 | | f | 00:05 | 300 | AUTO | -10 | MEDIUM | 60 | | true | 13085891 | | ff | ANY
    v_vdb_node0001 | 45035996273705006 | recovery | t | 0 | 0 | 0 | 0 | 52414048 | 55172684 | | 0 | 10 | 5 | f | 00:05 | 300 | AUTO | 107 | MEDIUM | 60 | | true | 5234356 | | ff | ANY
    v_vdb_node0001 | 45035996273705008 | dbd | t | 0 | 0 | 0 | 0 | 52414048 | 55172684 | | 0 | 4 | | f | 0 | 0 | AUTO | 0 | MEDIUM | 0 | | true | 13085891 | | ff | ANY
    v_vdb_node0001 | 45035996273705086 | jvm | t | 0 | 0 | 0 | 0 | 1992294 | 2097152 | | 0 | 8 | | f | 00:05 | 300 | AUTO | 0 | MEDIUM | 2 | | false | 249036 | | ff | ANY
    v_vdb_node0001 | 45035996273705096 | blobdata | t | 0 | 0 | 0 | 0 | 5739478 | 6041556 | | 0 | 2 | 0 | f | 0 | 0 | AUTO | 0 | HIGH | 0 | | false | | | ff | ANY
    v_vdb_node0001 | 45035996273705098 | metadata | t | 74194 | 74194 | 0 | 0 | 52414048 | 55172684 | | 0 | 1 | 0 | f | 0 | 0 | AUTO | 108 | HIGH | 0 | | false | | | ff | ANY
    (9 rows)

  • Bryan_H Vertica Employee Administrator

    No running queries at the moment, but I can guess what is happening: Vertica allocates memory to jobs by computing a query budget equal to (pool memory / planned concurrency). For the general pool, that is 55G / 8 = 6.875GB. The idea behind the query budget is to ensure that enough resources are held free in case Vertica needs to serve 8 requests of that size. This also applies to the Tuple Mover (tm) pool, which handles projection updates. So let's assume you will have the 3 jobs plus 1 spare slot just in case:
    ALTER RESOURCE POOL general PLANNEDCONCURRENCY 4;
    ALTER RESOURCE POOL tm PLANNEDCONCURRENCY 4;
    This should double the available query budget and hopefully speed things up. One more thing, though: with gigabytes of memory already available per query budget, you should be seeing thousands of rows per second per job. Have you checked the Talend integration guide for best practices? https://www.vertica.com/kb/Vertica-Integration-with-Talend-Connection-Guide/Content/Partner/Vertica-Integration-with-Talend-Connection-Guide.htm
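    As a rough sanity check (a sketch reusing the columns from your resource_pool_status output; the budget Vertica actually computes may differ slightly), you can compare the current budget with pool memory divided by the new planned concurrency:
    select pool_name,
           planned_concurrency,
           query_budget_kb,
           (memory_size_kb / 4) as approx_budget_kb_at_concurrency_4
    from resource_pool_status
    where pool_name in ('general', 'tm');
    For the general pool that works out to roughly 55098490 KB / 4, about 13.8 GB per query, versus the current ~6.5 GB.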
