Task Tuple Mover "Move Storages" is working nonstop
sergey_h
Vertica Customer ✭
Hi guys!
I have a Vertica cluster of 8 nodes (version 9.2.1).
On 6 of the 8 nodes the Tuple Mover task "Move Storages" is running nonstop and creating a high load on disk IO.
But I didn't execute any storage location functions.
I ran SELECT CLOSE_SESSION and restarted the cluster, but the TM task started again.
I don't understand why this task is running.
I have only one location for DATA,TEMP files on each node and I don't use storage policies.
How can I stop this task?
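For reference, a minimal sketch of how to see which Tuple Mover operations are currently executing on each node, using the TUPLE_MOVER_OPERATIONS system table:

-- show Tuple Mover operations that are currently executing, per node
SELECT node_name, operation_name, operation_status, table_name
FROM tuple_mover_operations
WHERE is_executing;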
Comments
There should not be any Move Storage activity in your case. Can you please open a support ticket and upload a scrutinize? If you already have a support ticket open, please share the ticket number so I can investigate this further.
Can you please confirm you don't have storage policies by running: select * from storage_policies;
I found one scrutinize that our customer sent where TM_MOVESTORAGE was running on 6 out of 8 nodes and they had a storage policy defined on 283 objects. If you find entries in storage_policies, you can use the CLEAR_OBJECT_STORAGE_POLICY API to clear these policies.
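For example, a minimal sketch of that check and cleanup (my_schema.my_table is a placeholder for an object returned by the first query):

-- list objects that still have a storage policy
SELECT * FROM storage_policies;
-- clear the policy on one object; repeat for each object listed
SELECT CLEAR_OBJECT_STORAGE_POLICY('my_schema.my_table');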
There might be a "business" reason for the storage policies, so I wouldn't blindly clear them.
@Jim_Knicely If you have only one storage location, you will not need a storage policy.
@skamat I have an open support ticket - SD02495550
@skamat When I found the issue with TM "Move storage", I checked the storage policies and storage locations.
I have only one storage location (type DATA,TEMP) and there were 286 storage policies. About two or three months ago I moved the main storage location to a new place and these storage policies remained. I cleared these policies, but it did not resolve my issue.
I resolved the issue today, but I don't understand the cause of the problem. I had two storage locations of type "USER"; I had used them for a long time and everything was fine. I dropped one of them and the TM "Move storage" task stopped.
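In case it is useful to others, a sketch of the checks and the drop (the path and node name are placeholders, not the real values from my cluster):

-- review defined storage locations, their usage type and label
SELECT node_name, location_path, location_usage, location_label
FROM storage_locations;
-- retire and then drop the unused USER location on one node
SELECT RETIRE_LOCATION('/vertica/user_loc', 'v_mydb_node0001');
SELECT DROP_LOCATION('/vertica/user_loc', 'v_mydb_node0001');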
I have reproduced this in our lab and reported a JIRA to be fixed with high priority. Thanks for bringing it to our attention.
Sergey, you may still need the user locations, so a better workaround is to drop the label on the data location. We are working on fixing this issue.
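A sketch of that workaround, assuming the data location currently has a label (the path, node name and the use of an empty string to clear the label are illustrative):

-- remove the location label from the data location (empty string clears the label)
SELECT ALTER_LOCATION_LABEL('/vertica/data', 'v_mydb_node0001', '');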