Data consistency during loading, INSERT, UPDATE, and DELETE

How does Vertica ensure data consistency during loading (COPY), INSERT, UPDATE, and DELETE?

Assume a file with a million records is being loaded into a table in Vertica, and the load job fails at record 500,001.  How does Vertica handle this?  Will it roll back the records already loaded?  Is there a way to restart the job at record 500,001?

Similarly, what happens if an INSERT, UPDATE, or DELETE fails mid-operation?


    Vertica is a transactional system: every operation either runs to completion or fails and is rolled back completely.  It is not possible to restart an operation in the middle.  If you need restartability, we suggest breaking your task up into multiple statements, with occasional COMMIT statements to save the current state.

    Loading data via COPY is a slightly special case:  if a single row fails to parse, that is not a query error by default.  Instead, the row is placed into a "rejected" bin, which (depending on the arguments to COPY) shows up either as a file or in a special rejections table.  So at the end of the COPY statement, you have most rows loaded into your table and a small set of rejected rows that you can fix and re-load with a subsequent statement.

    COPY's behavior here is very configurable; the above is the default, but you can get many different error-condition behaviors by passing different options to COPY.
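    As a sketch of the behaviors described above (the table name, file paths, and rejection threshold are hypothetical; the COPY options themselves are standard Vertica syntax):

    ```sql
    -- Default-style behavior: unparseable rows are diverted rather than
    -- failing the statement.  Rejected rows and the reasons for rejection
    -- go to the named files; alternatively, REJECTED DATA AS TABLE collects
    -- them in a queryable table.
    COPY sales FROM '/data/sales.csv' DELIMITER ','
        REJECTED DATA '/tmp/sales.rejected'
        EXCEPTIONS '/tmp/sales.exceptions'
        REJECTMAX 1000;   -- fail the load if more than 1000 rows reject
    COMMIT;

    -- Stricter behavior: any bad row rolls back the entire load.
    COPY sales FROM '/data/sales.csv' DELIMITER ',' ABORT ON ERROR;

    -- Restartable loading: split the input file beforehand and COMMIT
    -- after each chunk, so a failure only loses the current chunk and the
    -- job can be resumed from the next uncommitted file.
    COPY sales FROM '/data/sales_part01.csv' DELIMITER ',';
    COMMIT;
    COPY sales FROM '/data/sales_part02.csv' DELIMITER ',';
    COMMIT;
    ```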
    Thanks for explaining it in detail.  That helps.
