Data Consistency during loading, insert, update, delete
How does Vertica ensure data consistency during LOADING, INSERT, UPDATE, and DELETE?
Assume a file with a million records is being loaded into a table in Vertica, and the load job fails at record 500,001. How does Vertica handle it? Will it roll back the records? Is there a way to restart the job at record 500,001?
Similarly, what happens if an INSERT, UPDATE, or DELETE fails during the operation?
Comments
Loading data via COPY is a slightly special case: If a single row fails to parse, that is not a query error by default. Instead, it is placed into a "rejected" bin, which (depending on the arguments to COPY) shows up either as a file or in a special rejections table. So at the end of the COPY statement, you have a lot of rows in your table and some rejected rows to go back and re-load with a subsequent statement.
COPY's behavior here is very configurable; the above is the default, but you can get many different error-condition behaviors by passing different options to COPY.
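As a sketch of the behavior described above, the COPY statement below shows how rejected rows can be routed to files instead of failing the load. The table name and file paths are hypothetical; the REJECTED DATA, EXCEPTIONS, and REJECTMAX options are the standard Vertica COPY parameters for this.

```sql
-- Hypothetical table and file paths, for illustration only.
-- By default, a row that fails to parse is rejected rather than
-- aborting the load; the options below control where rejects go
-- and how many are tolerated before the statement fails.
COPY big_table
FROM '/data/million_rows.csv'
DELIMITER ','
REJECTED DATA '/data/rejected_rows.txt'   -- the raw unparseable rows
EXCEPTIONS '/data/load_exceptions.txt'    -- the reason each row was rejected
REJECTMAX 1000;                           -- abort if more than 1000 rows reject
```

If you want the opposite behavior, passing ABORT ON ERROR makes the first rejected row fail and roll back the entire statement, so the table is left untouched. After a default load, you can fix the rows in the rejected-data file and load just those with a follow-up COPY.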