Newbie question: multiple inserts fail after 1024 operations in a single transaction
If a COPY operation fails, I fall back to a single transaction containing multiple row-by-row inserts (each insert is wrapped with a create/release savepoint, in case that is relevant). This works in 9.0.x but fails with "Too many ROS containers exist for the following projections:" after 1024 operations in 10.
My understanding is that this is probably caused by the removal of the WOS in 9.3. However, I also understood that multiple inserts inside a single transaction should go to the same ROS file, so I don't know why I'm failing after 1024 records. Assuming my understanding is incorrect, I tried adding a call to mergeout, but that didn't fix the problem (I suspect I need to wait for the mergeout to complete). However, I'd prefer to find a way to manage my row-by-row inserts so as not to create the problem in the first place.
Answers
Vertica deprecated (i.e. removed) the WOS completely in Vertica 10, so you are now inserting directly into ROS.
Each INSERT statement is creating a new ROS container.
Example:
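Here is a minimal sketch of what that looks like (the table name `public.t` and column are illustrative, and the system-table column names should be checked against your version's documentation):

```sql
-- Illustrative table; each committed INSERT lands in its own ROS container.
CREATE TABLE public.t (id INT);

INSERT INTO public.t VALUES (1);
COMMIT;
INSERT INTO public.t VALUES (2);
COMMIT;

-- Count the ROS containers per projection for the table:
SELECT projection_name, COUNT(*) AS ros_container_count
FROM v_monitor.storage_containers
WHERE anchor_table_name = 't'
GROUP BY projection_name;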
Eventually the Tuple Mover will reduce the container count.
Here I am doing it manually:
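A sketch of invoking mergeout by hand via DO_TM_TASK (the table name is illustrative):

```sql
-- Ask the Tuple Mover to run a mergeout on the table's projections now,
-- rather than waiting for it to be scheduled automatically.
SELECT DO_TM_TASK('mergeout', 'public.t');
```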
You may need to rethink what it is you are trying to achieve.
Can you more gracefully handle the initial COPY statement errors so that you don't have to revert to individual INSERT statements? That is not the best use case for a columnar database like Vertica.
What is the max number of INSERT statements that you'd have to run? There is a configuration parameter named ContainersPerProjectionLimit that defaults to 1024, and you can increase it.
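A sketch of raising that parameter (the value 2048 is only illustrative, and the exact ALTER DATABASE syntax can vary by version):

```sql
-- Check the current setting:
SELECT GET_CONFIG_PARAMETER('ContainersPerProjectionLimit');

-- Raise the per-projection ROS container limit database-wide:
ALTER DATABASE DEFAULT SET ContainersPerProjectionLimit = 2048;
```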
But that comes with this warning:
Jim
Thanks for correcting my understanding. I had gotten the impression from somewhere that it was one ROS container per transaction. The problem I am hitting with COPY is a memory limitation in this case (see my other post).
https://forum.vertica.com/discussion/242201/copy-statement-fails-in-10-1-works-in-9-0#latest
I also have another scenario where I currently have to resort to inserts, which is dealing with an empty string. If I have ,"", in my CSV, this seems to result in a NULL rather than an empty string being inserted (even with trim=false).
In both of these scenarios I am forced to fall back to row-by-row inserts.
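One pattern that may preserve empty strings during COPY is reading the raw value into a FILLER column and coalescing NULL back to '' on load. This is an untested sketch (the table, column, and file path are hypothetical):

```sql
-- Hypothetical workaround: c_raw is a FILLER column that is parsed but not
-- stored; the real column c takes NVL(c_raw, '') so NULLs from ,"", in the
-- CSV become empty strings instead.
COPY public.t (
    c_raw FILLER VARCHAR(100),
    c AS NVL(c_raw, '')
)
FROM LOCAL '/tmp/data.csv' DELIMITER ',' ENCLOSED BY '"';
```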
It appears that if I manually invoke the Tuple Mover it is an asynchronous operation. Is there a way to get notified (as distinct from polling) to ensure that the Tuple Mover has completed its task?
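If polling turns out to be the only option, Tuple Mover activity is visible in the TUPLE_MOVER_OPERATIONS system table. A sketch (column names are from memory, so verify them against your version's documentation):

```sql
-- Repeat this query until it returns no rows, i.e. no mergeout is still
-- running for the table:
SELECT node_name, operation_name, operation_status
FROM v_monitor.tuple_mover_operations
WHERE table_name = 't'
  AND is_executing;
```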
Just to complicate matters, I am using the Postgres Node.js driver (pg) to work with Vertica.
Jim
Tried triggering the Tuple Mover... I think it's failing with the same issue that my COPY operation fails with...