How to limit the number of concurrent very large yearly grouping mergeouts

Sergey_Cherepan_1 ✭✭✭
edited January 2023 in General Discussion

Happy New Year!

And this is the time of year when Vertica does the grouping for yearly partitions.

I found that one of my smaller Vertica clusters has a bunch of aborted mergeouts due to running out of temp disk space.

Investigation shows that the two largest tables, roughly the same size, are undergoing a merge of their 2021 data into a single yearly partition group.

There is enough temp space, and it is on the same large disk as the data.

The problem is that Vertica starts at least 3 very large yearly grouping mergeouts in parallel on each node.
While there is enough disk space for a single yearly mergeout, the database cannot fit the temp space for 3 yearly mergeouts running in parallel.

I can limit the maximum size of a merged ROS, and the yearly mergeouts will then go through, resulting in several ROS containers per year. That would work, but it is an undesired outcome.
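
For reference, a sketch of that workaround, assuming the size cap meant here is the MaxMrgOutROSSizeMB configuration parameter (value in MB, -1 meaning no limit); pick a figure whose temp-space footprint fits on disk for a single mergeout:

  -- cap the size of any single merged ROS at roughly 50 GB (parameter value is in MB)
  SELECT SET_CONFIG_PARAMETER('MaxMrgOutROSSizeMB', 51200);
  -- remove the cap again once the yearly mergeouts have gone through
  SELECT SET_CONFIG_PARAMETER('MaxMrgOutROSSizeMB', -1);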

Apart from these very large yearly mergeouts, the Tuple Mover works perfectly well; I have never had problems on this cluster.

The database easily fits several concurrent monthly mergeouts on each node.

Question: how can I limit the number and total size of concurrent mergeouts per node for very large mergeouts on data outside the active partitions?

(I would suggest something like MaxTotalMergeoutSize, so that the total size of mergeouts per node cannot go above it.)

Answers

  • Bryan_H Vertica Employee Administrator

    Do the mergeouts kick off at a specific time, e.g. per MoveOutInterval / MergeOutInterval? That is, when one fails, does it attempt to resume X seconds or minutes later in line with a scheduled interval?
    If so, try to set the intervals to very long timeouts to pause automatic TM actions, then run TM manually on each projection:
    SELECT DO_TM_TASK('mergeout' [, '[[database.]schema.]{table | projection}' ]);
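    As a rough sketch of that approach (assuming the intervals are still set via SET_CONFIG_PARAMETER and take seconds; the table name below is only a placeholder):
    -- stretch the mergeout check interval so automatic mergeouts effectively pause (value in seconds)
    SELECT SET_CONFIG_PARAMETER('MergeOutInterval', 86400);
    -- run mergeout manually, one table at a time, to keep temp-space usage bounded
    SELECT DO_TM_TASK('mergeout', 'my_schema.my_big_table');
    -- restore the default interval (600 seconds) afterwards
    SELECT SET_CONFIG_PARAMETER('MergeOutInterval', 600);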

  • Several tables, including the two largest ones we are discussing here, are heavily loaded, creating thousands of ROS containers daily.
    I think increasing the mergeout intervals would produce ROS backpressure in no time.
    FYI, I resorted to limiting the mergeout size and killing all active large mergeouts.
    That addressed the problem, at the cost of having a few ROS containers per year instead of one. It is probably a reasonable compromise.
    In the course of the investigation, I observed that Vertica insists on running 4 very large mergeouts concurrently per node. That is definitely a problem; there should be no more than one very large mergeout per node at a time.
    All mid- and small-size mergeouts are working just fine.
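    For anyone who wants to check how many mergeouts are running per node at any moment, a query along these lines against the TUPLE_MOVER_OPERATIONS system table should show it (a sketch; column names may vary slightly by version):
    -- currently executing mergeouts per node and the amount of ROS data they cover
    SELECT node_name,
           COUNT(*)                  AS running_mergeouts,
           SUM(total_ros_used_bytes) AS bytes_being_merged
    FROM   tuple_mover_operations
    WHERE  operation_name ILIKE 'mergeout'
           AND is_executing
    GROUP  BY node_name;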

  • Hibiki Vertica Employee

    How about decreasing PLANNEDCONCURRENCY and MAXCONCURRENCY of the TM resource pool until the yearly mergeout is completed?

    Please refer to the following page to see how Vertica assigns mergeout threads to partitions.
    https://www.vertica.com/docs/12.0.x/HTML/Content/Authoring/AdministratorsGuide/ResourceManager/Scenarios/TuningTupleMoverPoolSettings.htm
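
    A minimal sketch of that change (example values only; note the current settings first so they can be restored once the yearly mergeouts are done):
    -- note the current settings so they can be restored later
    SELECT plannedconcurrency, maxconcurrency FROM resource_pools WHERE name = 'tm';
    -- temporarily throttle the Tuple Mover pool
    ALTER RESOURCE POOL tm PLANNEDCONCURRENCY 2 MAXCONCURRENCY 2;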

  • Thanks for the advice; that might work, but only on weekends when the load on these tables is not that heavy.
    Reducing mergeout concurrency during business hours would most likely cause ROS backpressure.
    I have more or less settled on having more than one ROS per year; that is not too bad.
    I believe Vertica needs to do something to address the issue properly: no more than one concurrent huge mergeout per node!

  • Hibiki Vertica Employee
    edited January 2023

    Yes, right. If frequent data loads run while the yearly mergeout is executing with reduced concurrency, you may hit ROS pushback. Let me raise an enhancement request. I cannot commit now that the engineering team will accept the request, or when it would be implemented, so please manage the mergeout with the workarounds for the time being.

    BTW, did you see one huge temporary file in the temp directory, or did it use almost double the space because the TM generated new ROS files during the mergeout?

  • @Hibiki,

    Let me know if I need to open a service request. Yes, please proceed and open the enhancement request.

    When a yearly mergeout happens, I can see several deleted temp files in the DATA directory. They keep growing and eventually fill up all the disk space. The mergeout process on the node then gets killed with the error "cannot allocate 1MB on disk", and the disk space is freed. This happens in an infinite loop; I observed this behaviour for a few days.
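
    For anyone wanting to watch the disk filling and freeing in that loop, polling free space per node via the DISK_STORAGE system table works as a rough check (a sketch):
    -- free and used space per node on the volume(s) holding DATA and TEMP
    SELECT node_name, storage_path, storage_usage,
           disk_space_used_mb, disk_space_free_mb
    FROM   disk_storage
    ORDER  BY node_name;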

  • Hibiki Vertica Employee

    @Sergey_Cherepan_1 Yes, please open a new service request and provide the results of your investigation. That information will be very helpful when I raise the enhancement request.
