Insufficient resources to execute plan on pool general [Timedout waiting for resource request: Request exceeds limits]
We recently upgraded our Vertica database server from 6.4 to Vertica Analytic Database v7.1.0-1. We use Talend, and since the upgrade we have had Talend jobs failing on our Vertica server due to memory issues. The most reliable repro is a daily job which previously always ran successfully in under 4 minutes and now times out and dies. The error in the database logs:

2014-09-19 10:56:15.533 Init Session:0x2b3e70013220-a000000742bfd5 @v_pdw_node0001: 53000/3587: Insufficient resources to execute plan on pool general [Timedout waiting for resource request: Request exceeds limits: Memory(KB) Exceeded: Requested = 4288595, Free = 118927 (Limit = 19981816, Used = 19862889) (queueing threshold)]
LOCATION: Exec_compilePlan, /scratch_a/release/vbuild/vertica/Dist/Dist.cpp:1217
2014-09-19 10:56:16.263 Init Session:0x2b3e701a4390-a000000742bfdf [Txn] Starting Commit: Txn: a000000742bfdf 'select atl.id, atl.APP_DOMAIN, atl.CREATE_DATE, atl.TRANS_ID, atl.SOURCE_PARTNER, atl.LAST_NAME, atl.FIRST_NAME, bcf.LP_ID LOYALTY_PROFILE_ID, atl.MEMBER_ID, bcf.TX_ID ORDER_ID, atl.AMOUNT, atl.BATCH_ID, atl.BATCH_KEY from"STAGING".JPMC_ATL_VW atl left outer join "STAGING".JPMC_BCF_VW bcf on atl.APP_DOMAIN = bcf.APP_DOMAIN and atl.TRANS_ID = bcf.MINOR_KEY where atl.BATCH_KEY > 12303 or bcf.BATCH_KEY > 12303 ;'

Some interesting notes:
• We have a single node with 28 GB RAM.
• The identical job runs with no problems on our Staging Vertica server, which is also Vertica v7.1.0-1, matches production in data volume, and is a single node with less memory.
• Executing the identical job through Talend Studio succeeds; Vertica is happy enough to run the query and return the data when run in that mode.
• The Talend version is 5.4.1.

Things we have tried that have not made any difference:
• Increased the queuetimeout of the general resource pool from 600 to 900 to 1200 (statements sketched after this post). Performance only degraded.
• Increased the maximum memory maps configuration of the database server.
• Modified plannedconcurrency to AUTO.
• Created new projections for the tables the query uses.

Our questions:
• Has anyone seen this sort of memory issue come up after upgrading to Vertica v7.1.0?
• How can we resolve this? What else should we try?

Thanks in advance for your help.
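For reference, the resource pool changes mentioned above were applied roughly like this (a sketch of the ALTER RESOURCE POOL statements, not the exact commands we ran):

ALTER RESOURCE POOL general QUEUETIMEOUT 900;          -- later raised to 1200
ALTER RESOURCE POOL general PLANNEDCONCURRENCY AUTO;   -- changed plannedconcurrency to AUTO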
Comments
select node_name, pool_name, memory_inuse_kb, running_query_count from resource_pool_status;
Is it the same pool where the query used to run?
Get the transaction_id and statement_id of the failing query, then check dc_resource_acquisitions:
dbadmin=> select node_name, request_type, pool_name, memory_kb, queries, succeeded, failing_resource from dc_resource_acquisitions where transaction_id = <> and statement_id = <>;
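If you need to find the transaction_id and statement_id, one way (a sketch, assuming the failing query text is distinctive enough to filter on) is to look them up in query_requests:

select transaction_id, statement_id, start_timestamp, request_duration_ms, request
from query_requests
where request ilike '%JPMC_ATL_VW%'   -- match on a table name from the failing query
order by start_timestamp desc
limit 10;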
Also check query_budget_kb in all the pools (especially the pool where you are running the query).
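For example (resource_pool_status exposes the per-query budget directly):

select pool_name, memory_size_kb, max_memory_size_kb, planned_concurrency, query_budget_kb
from resource_pool_status;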
Running select * from sessions; should also give you an idea of how many sessions are open.
Check the other columns in resource_pool_status as well.
Try lowering planned_concurrency for the general pool (the pool where you are running the query; you can see the current value in the resource_pools table). A lower planned concurrency gives each query a larger memory budget.
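A sketch of that change (the value 4 is only an illustration; pick what fits a 28 GB single node):

-- lower planned concurrency on the general pool so each query gets a larger budget
ALTER RESOURCE POOL general PLANNEDCONCURRENCY 4;

-- confirm the new per-query budget
select pool_name, planned_concurrency, query_budget_kb
from resource_pool_status
where pool_name = 'general';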
It could be simpler than that.
Check and compare the execution plans between the Staging server and production.
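For example, run EXPLAIN for the failing statement (taken from the error log above) on both servers and compare the plans:

EXPLAIN
select atl.id, atl.APP_DOMAIN, atl.CREATE_DATE, atl.TRANS_ID, atl.SOURCE_PARTNER,
       atl.LAST_NAME, atl.FIRST_NAME, bcf.LP_ID LOYALTY_PROFILE_ID, atl.MEMBER_ID,
       bcf.TX_ID ORDER_ID, atl.AMOUNT, atl.BATCH_ID, atl.BATCH_KEY
from "STAGING".JPMC_ATL_VW atl
left outer join "STAGING".JPMC_BCF_VW bcf
  on atl.APP_DOMAIN = bcf.APP_DOMAIN
 and atl.TRANS_ID = bcf.MINOR_KEY
where atl.BATCH_KEY > 12303
   or bcf.BATCH_KEY > 12303;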
Do you gather statistics?
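If not, refresh statistics on the base tables behind those views and re-check the plan. A sketch (the table names here are assumptions; substitute the actual base tables of JPMC_ATL_VW and JPMC_BCF_VW):

SELECT ANALYZE_STATISTICS('STAGING.JPMC_ATL');  -- assumed base table of JPMC_ATL_VW
SELECT ANALYZE_STATISTICS('STAGING.JPMC_BCF');  -- assumed base table of JPMC_BCF_VW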