Excessive ROS Containers

We have a cluster that creates roughly 20x more ROS containers per projection than our other clusters. The data is essentially the same across all clusters, and the problem cluster is neither the largest in data volume nor the oldest. Everything is on the latest 8.1.x patch.


  • swalkaus (Vertica Employee)

    Without more information I could only speculate about what the cause might be. The following system tables and meta-function might provide some clues:

    • storage_containers - is the problem isolated to a subset of tables or projections?
    • partitions - is the number of distinct partitions per projection higher than on other clusters?
    • evaluate_delete_performance() - are any projections poorly optimized for replay-delete plans and therefore holding up the Tuple Mover (TM)?
    • tuple_mover_operations - is TM activity roughly equivalent across all clusters?
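
    As a rough sketch, the first two checks could look like this (table and column names come from Vertica's documented system tables; the LIMIT values are arbitrary):

    ```sql
    -- Count ROS containers per projection to see whether the problem
    -- is concentrated in a few projections or spread across the cluster.
    SELECT node_name,
           schema_name,
           projection_name,
           COUNT(*) AS ros_container_count
    FROM   storage_containers
    GROUP  BY 1, 2, 3
    ORDER  BY ros_container_count DESC
    LIMIT  20;

    -- Compare distinct partition counts per projection; an unusually
    -- high count here would explain a high container count.
    SELECT projection_name,
           COUNT(DISTINCT partition_key) AS distinct_partitions
    FROM   partitions
    GROUP  BY 1
    ORDER  BY distinct_partitions DESC
    LIMIT  20;
    ```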
  • (original poster)

    I manually ran mergeout on about 15-20 of the larger projections. That started to work, so I figured I would do them all manually. It seems that once I got past the largest ones, Vertica was able to start taking care of the others on its own. My guess is that a timeout prevented the Tuple Mover from ever getting past a certain projection.
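
    For reference, a manual mergeout like the one described above can be triggered per table with Vertica's DO_TM_TASK meta-function (the schema and table name here are placeholders):

    ```sql
    -- Force an immediate mergeout of a table's ROS containers
    -- instead of waiting for the Tuple Mover's scheduled run.
    SELECT DO_TM_TASK('mergeout', 'my_schema.my_large_table');
    ```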
