ANALYZE_STATISTICS - CPU/memory consumption
vcarusi
Hi,
I have a standalone server on Ubuntu 16.04, Vertica 9.2.
Every time ANALYZE_STATISTICS is executed, CPU and memory consumption jumps. It seems these resources are not released at the end. As a consequence, after a few days the Vertica server crashes due to lack of memory.
I am not sure which is at fault: Vertica or Ubuntu?
Has anyone else faced this behavior?
Thank you,
Veronica
Comments
Is there a system table where the events that force CPU/memory to grow are logged?
The QUERY_CONSUMPTION table has a ton of great info on a query, including CPU cycles consumed and memory reserved and allocated.
See:
https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/SQLReferenceManual/SystemTables/MONITOR/QUERY_CONSUMPTION.htm
Unfortunately ANALYZE_STATISTICS commands are not recorded in this table.
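For regular queries, though, a quick look at the biggest consumers might be something like this (column names as documented for the 9.2 QUERY_CONSUMPTION table; adjust as needed):

```sql
-- Peak memory and CPU per statement, largest consumers first
SELECT start_time, user_name, cpu_cycles_us, peak_memory_kb
FROM v_monitor.query_consumption
ORDER BY peak_memory_kb DESC
LIMIT 20;
```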
Thank you!
v_monitor.memory_usage is a kind of log/history.
I would like to understand which events/queries force the Vertica server to allocate memory.
I have a crash due to lack of memory at least once a week, and I need to understand its cause. I started stopping processes one by one; it seems that ANALYZE_STATISTICS could be a cause.
And here is the memory consumption with the statistics-gathering process stopped.
This process runs ANALYZE_STATISTICS on a 90 GB table, with 50 as the percent parameter, every 3 hours.
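For reference, a call scoped like that job's would look something like this (the table name is a placeholder; the second argument is the percentage of rows to sample):

```sql
-- Sample 50% of the table's rows when collecting statistics
SELECT ANALYZE_STATISTICS('public.big_table', 50);
```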
Anyway, thank you, Jim, for your support.
Is there a way to release the memory?
Can you check if there is memory in use in your resource pools? There should not be any memory in use if the running query count is 0.
Thank you, Jim!
Again I stopped all processes, including the ANALYZE_STATISTICS job.
Memory is not released.
select pool_name, memory_size_kb, memory_size_actual_kb, memory_inuse_kb, general_memory_borrowed_kb, max_memory_size_kb, running_query_count
from resource_pool_status;
select * from v_monitor.memory_usage
order by start_time desc;
Any idea?
How much memory is the Vertica process using?
Example. Here is an idle node on my cluster:
And after a clean restart:
But as I use more resources, the process will allocate more RAM.
This memory is not released. It just remains allocated.
Vertica will allocate up to the MAXMEMORYSIZE of the General Pool.
In my case:
07:55:27 up 19:23, 2 users, load average: 0.80, 0.88, 1.49
Tasks: 158 total, 1 running, 157 sleeping, 0 stopped, 0 zombie
%Cpu(s): 16.2 us, 2.2 sy, 0.1 ni, 81.5 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 31398092 total, 1963840 free, 21478656 used, 7955596 buff/cache
KiB Swap: 2097148 total, 1743752 free, 353396 used. 9467712 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2864 dbadmin 20 0 26.573g 0.019t 39856 S 4.0 65.7 1538:53 vertica
1506 dbadmin 20 0 1251452 20712 6032 S 1.0 0.1 113:25.70 python
23 root 20 0 0 0 0 S 0.3 0.0 0:11.51 ksoftirqd/3
5410 ubuntu 20 0 92800 3228 2328 S 0.3 0.0 0:33.38 sshd
1 root 20 0 37728 4728 3012 S 0.0 0.0 0:04.27 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd
Memory used : 65%
You are right. After a restart, the memory is released.
But it isn't a suitable solution because the server is a production one.
So, in order to keep memory under control, I need to identify the Vertica operations that force the server to request memory from the system.
Thank you.
This shouldn't be an issue if you are following the recommendation:
See:
https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/InstallationGuide/BeforeYouInstall/platformhardwarereqts.htm
You can allocate less memory to Vertica by lowering the MAXMEMORYSIZE of the General Pool. But I wouldn't recommend that on a production system.
See:
https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/SQLReferenceManual/Statements/Built-inPoolConfiguration.htm
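If you did want to lower the cap, the change would look roughly like this (the 16G value is purely illustrative; the actual default is a percentage of system RAM per the built-in pool documentation above, so pick a value that fits your workload):

```sql
-- Example only: cap the General pool at 16 GB
ALTER RESOURCE POOL general MAXMEMORYSIZE '16G';
```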
https://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/AdministratorsGuide/Statistics/BestPracticesForStatisticsCollection.htm?tocpath=Administrator's Guide|Collecting Database Statistics|_____6
Overhead Considerations
Running ANALYZE_STATISTICS is an efficient but potentially long-running operation. You can run it concurrently with queries and loads in a production environment. However, the function can incur considerable overhead on system resources (CPU and memory), at the expense of queries and load operations. To minimize overhead, run ANALYZE_STATISTICS on individual tables rather than the entire database, or on individual table columns.
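Following that recommendation, collection can be narrowed to a single table or even to specific columns, along these lines (schema, table, and column names are placeholders):

```sql
-- Restrict collection to the columns that actually appear in predicates,
-- sampling only 10% of rows
SELECT ANALYZE_STATISTICS('public.big_table', 'customer_id,order_date', 10);
```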
What are the configuration parameters regarding ANALYZE_STATISTICS?
I found only AnalyzeRowCountInterval. Its default value corresponds to 24 h.
I checked with
select statistics_updated_timestamp
from V_CATALOG.PROJECTION_COLUMNS;
and statistics_updated_timestamp was 2 days ago.
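If you wanted to change that interval, something like the following should work (the 3600 value is only an illustration; I am assuming the parameter is expressed in seconds):

```sql
-- Re-check row counts every hour instead of the 24 h default
SELECT SET_CONFIG_PARAMETER('AnalyzeRowCountInterval', 3600);
```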
I battled that memory leak too. Running SELECT ANALYZE_STATISTICS('') caused memory growth (about 4 GB per day), so I made a script that generates a per-table statement list:
vsql -t -c "SELECT 'Select now(),'''||table_name ||''',analyze_statistics('''||table_schema ||'.' ||table_name||''');' FROM v_catalog.tables;" -o /home/dbadmin/scripts/tmp/analyze.sql
And you can then run it.
Even better, you can split it and run it in parallel on all your nodes:
split -d -l 300 analyze.sql anal
for f in anal??; do mv "$f" $f.sql; done
ls anal??.sql | xargs -n 1 -P 16 -I cmd sh -c 'sleep 1 && /opt/vertica/bin/vsql -C -t -e -f cmd -o /home/dbadmin/maintlog/analyze/A_cmd_.log'>>/home/dbadmin/maintlog/analyze/out_sql.log
It analyzes very fast, with no memory leaks.