Why are tuple mover operations ("move storages") taking up 80% of disk I/O for the last week and a half?

We loaded about 20 billion rows into a table that has about 6 partitions (by month).  That was over a week ago, but the Tuple Mover has been running ever since, doing "move storage" operations on the data.  This consumes 80% of I/O for 20 minutes at a time, then pauses about 5 minutes before doing it again.

Is this normal?  Will it ever stop?  How do I know it is getting closer to finishing and not just stuck?
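
One way to check whether the Tuple Mover is making progress rather than spinning (a sketch; it assumes the standard v_monitor.storage_containers system table) is to watch ROS container counts, which mergeout should drive down over time:

    -- Count ROS containers per projection; if mergeout is converging,
    -- these counts should trend downward between runs.
    select schema_name, projection_name, count(*) as ros_containers
    from v_monitor.storage_containers
    group by schema_name, projection_name
    order by ros_containers desc;

If the counts shrink between checks, the operations are finishing work; if they stay flat while I/O stays high, something is likely stuck.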


  •
    What is your load schedule/load type, and what is your data modification rate?
  •
    We are no longer loading data into the schema in question, and I set the AHM to now.  It was a one-time load for testing purposes.  We are continuing to trickle-load on a frequent basis into 4 other schemas (but not at very high rates).
  •
    What you can do is review your TM resource pool and, if possible, alter it to perform more operations concurrently by increasing the TM pool's resources. The values you choose should correspond to your actual hardware resources.
    Also take a look at the RESOURCE_QUEUES and RESOURCE_ACQUISITIONS tables during the periods of slow tuple movement.
    One other important point concerns partitions and ROS containers: Vertica's documentation recommends creating fewer than 20 partitions and avoiding more than 50 ROS containers per partition, because delete/alter operations open all of a table's containers.

    Also monitor tuple_mover_operations during your peak hours:
    select * from v_monitor.tuple_mover_operations where is_executing='t';
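
    For example, the concurrency change could look like this (illustrative values only; appropriate MAXCONCURRENCY/PLANNEDCONCURRENCY settings depend on your hardware and Vertica version):

        -- Allow the TM pool to run more mergeout operations in parallel.
        alter resource pool tm maxconcurrency 5 plannedconcurrency 5;

    and the TM pool's current activity can be inspected with (column names assumed from the standard system tables):

        select node_name, pool_name, memory_inuse_kb, duration_ms
        from v_monitor.resource_acquisitions
        where pool_name = 'tm';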
  •
    I did that earlier today, and it didn't make any difference from what I can tell.  I/O is still at 80% most of the time.  The tuple_mover_operations table shows 1 to 3 operations at a time.  CPU and memory usage don't seem to have gone up.
  •
    We're having a similar issue here, though in a slightly different scenario. We had a 6.1.2 cluster in which we tested using an external NAS storage location. After we finished the test, we deleted all policies and tried to remove the storage location, but with no success. It appears that Vertica left some data there, although judging by the system tables nothing should have been there. We left it as is, because it wasn't production.

    Two days ago we upgraded this cluster to version 7.0.1. Since the upgrade, we've been seeing exactly what Chris describes here: endless "Move Storages" operations. Another interesting fact is that we were finally able to drop the external storage location, and it is indeed empty now. So I'm asking myself what exactly it is doing...

    Chris, did the "Move Storages" operations ever stop? Or did you give up on it?

    Thanks, Dan
