Update fails with error 5156

saumya_tamma Registered User
Hi,

When trying to update two separate rows of the same table from two different sessions, the update in the second session fails if the first session doesn't commit its transaction. Since this is a columnar database, shouldn't this basic scenario work? I am confused; can you help?

SESSION 1:

vert_admin=> select * from test;
 n | name
---+------
 1 | A
 2 | N
(2 rows)

vert_admin=> update test set name='B' where n=1;
 OUTPUT
--------
      1
(1 row)

(not committed yet)

At this time:

vert_admin=> select * from locks;
 node_names | object_name | object_id | transaction_id | transaction_description | lock_mode | lock_scope | request_timestamp | grant_timestamp
 v_vertest02_node0004,v_vertest02_node0005,v_vertest02_node0006 | Table:public.test | 45035996273723794 | 45035996273758950 | Txn: a000000000d2e6 'select * from test;' | X | TRANSACTION | 2013-06-24 14:25:24.533941-07 | 2013-06-24 14:25:24.533951-07
(1 row)

SESSION 2:

vert_admin=> update test set name='X' where n=2;

(Notice it is a different row, not the same row.) The statement hangs for 5 minutes, because the LockTimeout parameter is 300 seconds, and then fails once the timeout is exceeded:

vert_admin=> update test set name='X' where n=2;
ERROR 5156: Unavailable: initiator locks for query - Locking failure: Timed out X locking Table:public.test. X held by [user vert_admin (select * from test;)]. Your current transaction isolation level is READ COMMITTED

vert_admin=> select * from locks;
 node_names | object_name | object_id | transaction_id | transaction_description | lock_mode | lock_scope | request_timestamp | grant_timestamp
 v_vertest02_node0004,v_vertest02_node0005,v_vertest02_node0006 | Table:public.test | 45035996273723794 | 45035996273758950 | Txn: a000000000d2e6 'select * from test;' | X | TRANSACTION | 2013-06-24 14:25:24.533941-07 | 2013-06-24 14:25:24.533951-07
 v_vertest02_node0004 | Table:public.test | 45035996273723794 | 45035996273758951 | Txn: a000000000d2e7 'select * from test;' | X | REQUESTED | 2013-06-24 14:26:14.673129-07 |
 v_vertest02_node0004 | Table:public.test | 45035996273723794 | 45035996273758967 | Txn: a000000000d2f7 'Moveout: (Table: public.test) (Projection: public.test_b1)' | U | TRANSACTION | 2013-06-24 14:27:30.010717-07 | 2013-06-24 14:27:30.010722-07
 v_vertest02_node0004 | Table:public.test | 45035996273723794 | 45035996273758967 | Txn: a000000000d2f7 'Moveout: (Table: public.test) (Projection: public.test_b1)' | T | REQUESTED | 2013-06-24 14:27:30.011002-07 |
(4 rows)

vert_admin=> select * from configuration_parameters where parameter_name = 'LockTimeout';
 node_name | parameter_name | current_value | default_value | change_under_support_guidance | change_requires_restart | description
 ALL | LockTimeout | 300 | 300 | t | f | Time to wait for a table lock before giving up (seconds)
(1 row)

Comments

  • Hi Saumya,

    This is correct behavior -- Vertica only has table-level locking, nothing more granular than that. So, as you can see, the first UPDATE takes out a lock on the table, and this prevents the second UPDATE from completing.

    Vertica is not optimized for workloads with lots of UPDATEs or DELETEs. The purpose of a Vertica database is to keep all your data, and an UPDATE is, by nature, throwing out data (namely, the previous value of that field in that record). You could instead create a table that contains all records ever, and simply INSERT a new record whenever an event occurs that would change the value. Compared to UPDATE performance, the performance of this sort of INSERT statement is much better than you might expect.

    Adam
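    The insert-only pattern Adam describes can be sketched as follows. This is a hedged illustration, not a definitive recipe: the table name test_history, the recorded column, and the use of ROW_NUMBER() to pick the latest version are all choices made for this example, not anything from the original thread.

    ```sql
    -- Hypothetical append-only variant of the test table: instead of
    -- updating a row in place, insert a new version with a timestamp.
    CREATE TABLE test_history (
        n        INT,
        name     VARCHAR(10),
        recorded TIMESTAMP DEFAULT now()
    );

    -- "Updating" row n=1 is just another INSERT, so two sessions
    -- writing different rows no longer contend for the same X lock.
    INSERT INTO test_history (n, name) VALUES (1, 'A');
    INSERT INTO test_history (n, name) VALUES (1, 'B');  -- the "update"

    -- Current value of each row = the most recent record per n.
    SELECT n, name
    FROM (
        SELECT n, name,
               ROW_NUMBER() OVER (PARTITION BY n ORDER BY recorded DESC) AS rn
        FROM test_history
    ) latest
    WHERE rn = 1;
    ```

    As a bonus, the full history of every value is preserved and can be queried later.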
  • saumya_tamma Registered User
    Thanks Adam, this helps.
