
Auto-update the last-modification time of an entry in a Vertica table

MySQL allows an "ON UPDATE" clause in CREATE TABLE to automatically refresh a timestamp column when a row is modified. Is there any such support in Vertica?
REF: https://dev.mysql.com/doc/refman/8.0/en/timestamp-initialization.html

Answers

  • Unnecessary. Just give the column a DEFAULT.

    dbadmin=> create table ts (id int, last_ts datetime default sysdate() ) ;
    CREATE TABLE
    dbadmin=> insert into ts (id) values(1) ;

     OUTPUT 
    --------
          1
    (1 row)
    dbadmin=> select * from ts ;
     id |          last_ts           
    ----+----------------------------
      1 | 2019-08-31 17:12:03.733695
    (1 row)

  • Thanks for your response. I'm using the above for the created time,
    but that column goes untouched when a row is modified.
    What I'm looking for is when a given row was last modified; could you suggest a solution for that?

  • Thanks, got it. I'll try setting the timestamp column explicitly when writing the row entry itself.
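
    A minimal sketch of that approach, reusing the ts table from the first answer (values are hypothetical): set the timestamp column explicitly, e.g. back to its DEFAULT, in every UPDATE so sysdate() is re-evaluated at modification time.

    dbadmin=> update ts set id = 2, last_ts = DEFAULT where id = 1 ;
    dbadmin=> commit;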

  • Hi Kanishka,
    Can you describe your use case a bit more? Trying to find an equivalent for every construct from a transactional database may not be the best approach in Vertica, which is an analytic database platform. Although Vertica supports ACID-compliant transactions, its main strength is analytic workloads.
    For the specific use case you describe, perhaps you can use the EPOCH associated with every row in every table.

    dbadmin=> create schema sp;
    CREATE SCHEMA
    dbadmin=> create table sp.t1 (c1 identity, c2 varchar(5), insdt datetime default sysdate(), insusr varchar(20) DEFAULT current_user(), upddt datetime default sysdate(), updusr varchar(20) DEFAULT current_user());
    CREATE TABLE
    dbadmin=> insert into sp.t1 (c2) values('AAA');
     OUTPUT 
    --------
          1
    (1 row)
    
    dbadmin=> insert into sp.t1 (c2) values('BBB');
     OUTPUT 
    --------
          1
    (1 row)
    
    dbadmin=> insert into sp.t1 (c2) values('CCC');
     OUTPUT 
    --------
          1
    (1 row)
    
    dbadmin=> commit;
    COMMIT
    dbadmin=> select * from sp.t1;
     c1 | c2  |           insdt            | insusr  |           upddt            | updusr  
    ----+-----+----------------------------+---------+----------------------------+---------
      1 | AAA | 2019-09-01 19:54:09.249112 | dbadmin | 2019-09-01 19:54:09.249112 | dbadmin
      2 | BBB | 2019-09-01 19:54:09.260224 | dbadmin | 2019-09-01 19:54:09.260224 | dbadmin
      3 | CCC | 2019-09-01 19:54:09.264122 | dbadmin | 2019-09-01 19:54:09.264122 | dbadmin
    (3 rows)
    

    Let us check the EPOCH of the rows committed so far...

    dbadmin=> select t1.*, epoch from sp.t1;
     c1 | c2  |           insdt            | insusr  |           upddt            | updusr  | epoch 
    ----+-----+----------------------------+---------+----------------------------+---------+-------
      1 | AAA | 2019-09-01 19:54:09.249112 | dbadmin | 2019-09-01 19:54:09.249112 | dbadmin |    40
      2 | BBB | 2019-09-01 19:54:09.260224 | dbadmin | 2019-09-01 19:54:09.260224 | dbadmin |    40
      3 | CCC | 2019-09-01 19:54:09.264122 | dbadmin | 2019-09-01 19:54:09.264122 | dbadmin |    40
    (3 rows)
    
    dbadmin=> select get_current_epoch(),get_last_good_epoch();
     get_current_epoch | get_last_good_epoch 
    -------------------+---------------------
                    41 |                  39
    (1 row)
    
    dbadmin=> update sp.t1 set c2 = 'YYY' where c1 = 2;
     OUTPUT 
    --------
          1
    (1 row)
    
    dbadmin=> commit;
    COMMIT
    dbadmin=> select t1.*, epoch from sp.t1;
     c1 | c2  |           insdt            | insusr  |           upddt            | updusr  | epoch 
    ----+-----+----------------------------+---------+----------------------------+---------+-------
      1 | AAA | 2019-09-01 19:54:09.249112 | dbadmin | 2019-09-01 19:54:09.249112 | dbadmin |    40
      3 | CCC | 2019-09-01 19:54:09.264122 | dbadmin | 2019-09-01 19:54:09.264122 | dbadmin |    40
      2 | YYY | 2019-09-01 19:54:09.260224 | dbadmin | 2019-09-01 19:54:09.260224 | dbadmin |    41
    (3 rows)
    
    dbadmin=> update sp.t1 set c2 = 'XXX', upddt = DEFAULT, updusr=DEFAULT where c1 = 2;
     OUTPUT 
    --------
          1
    (1 row)
    
    dbadmin=> commit;
    COMMIT
    dbadmin=> select t1.*, epoch from sp.t1;
     c1 | c2  |           insdt            | insusr  |           upddt            | updusr  | epoch 
    ----+-----+----------------------------+---------+----------------------------+---------+-------
      1 | AAA | 2019-09-01 19:54:09.249112 | dbadmin | 2019-09-01 19:54:09.249112 | dbadmin |    40
      3 | CCC | 2019-09-01 19:54:09.264122 | dbadmin | 2019-09-01 19:54:09.264122 | dbadmin |    40
      2 | XXX | 2019-09-01 19:54:09.260224 | dbadmin | 2019-09-01 19:58:17.861181 | dbadmin |    42
    (3 rows)
    
    dbadmin=> 
    

    Hope this helps to implement an audit mechanism.

  • Thanks, Sudhakar_B, for your response. The primary use case I have in mind for the last-modified date is debugging.
    Epoch certainly seems a good way to get the relative order of changes, but I'm not sure how to map epochs to wall-clock time, which is of prime importance while debugging.
    One way, as Vertica_Curtis suggested, is to write the timestamp into the table at update time itself; I'm not sure whether there are alternatives.

  • Thanks for the clarification, Kanishka.
    If the use case is debugging, then you can potentially use to_timestamp to convert an epoch, though the result is an APPROXIMATION.
    The data collector table dc_projection_checkpoint_epochs can also be used to convert checkpoint epochs into timestamps.
    Please test it out and share your findings; it might help other members.

    Please note: AFAIK, epoch-to-timestamp conversion in Vertica is only an approximation.

    select time, checkpoint_epoch   
    from dc_projection_checkpoint_epochs
    where time > sysdate -1/48
    order by 2 desc;
    time                 checkpoint_epoch  
    -------------------  ----------------  
    2019-09-04 19:07:26  62                
    2019-09-04 19:04:45  60                
    2019-09-04 19:02:26  60                
    2019-09-04 19:02:14  59                
    2019-09-04 18:57:26  59                
    2019-09-04 18:57:08  53                
    2019-09-04 18:47:26  53                
    2019-09-04 18:57:08  53                
    

    However, if this use case ever evolves into audit-tracking requirements, then please instrument your ETL as recommended by @Vertica_Curtis.

  • You can get the epoch-to-time approximation from the EPOCHS table as well, but that table only holds the currently active epochs. If you want them to last longer, you'll have to increase HistoryRetentionInterval. So dc_projection_checkpoint_epochs might be a better option.
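
    As a sketch of that mapping (the EPOCHS system table's column names are assumed here to be epoch_number and epoch_close_time; please verify against your version):

    dbadmin=> select t.*, e.epoch_close_time as approx_upd_time
    dbadmin->   from sp.t1 t
    dbadmin->   left join epochs e on e.epoch_number = t.epoch ;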

  • moshegmosheg (Employee)

    In addition to the above, consider the following two options.
    You can avoid DELETEs, UPDATEs, and MERGEs entirely.
    This reduces delete vectors, which improves performance, and it gives you data versioning for debugging.
    Instead of deleting or updating your data, try using a Live Aggregate Projection (LAP) Top-K that exposes the last version of the data.
    In the base (anchor) table you only do INSERTs.
    The anchor table will keep growing, since new data is inserted to "replace" or "hide" the old data.
    From time to time (every year, or every several months) you can "purge" the anchor table to get rid of old versions of the data.
    Like so:
    Step 1: CREATE TABLE new_base_table AS SELECT * FROM lap (create a new table holding the "last version" of the data)
    Step 2: DROP TABLE base_table CASCADE (drop the old base table, its projections, and the LAP)
    Step 3: ALTER TABLE new_base_table RENAME TO base_table (rename the "last version" of the data)
    Step 4: Recreate the LAP
    From now on you continue INSERTing into the base table as usual.
    This method will let you know the row creation time and “update/insert” history.
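
    A minimal sketch of the Top-K approach (table, projection, and column names are illustrative):

    dbadmin=> create table base_table (k int, val varchar(10), ver_ts datetime default sysdate());
    dbadmin=> create projection base_table_last as
    dbadmin->   select k, val, ver_ts from base_table
    dbadmin->   limit 1 over (partition by k order by ver_ts desc);
    dbadmin=> -- the "last version" view of the data:
    dbadmin=> select k, val, ver_ts from base_table
    dbadmin->   limit 1 over (partition by k order by ver_ts desc);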

    The other option is to periodically copy the data from dc_requests_issued into a regular table that you maintain,
    so you can research data changes later.
    dc_requests_issued (whose schema might change) currently includes the following fields:
    time
    node_name
    session_id
    user_id
    user_name
    transaction_id
    statement_id
    request_id
    request_type
    label
    client_label
    search_path
    query_start_epoch
    request (for example SELECT x,y,z from mytable;)
    is_retry

  • Additionally, I just found a post from @Jim_Knicely (from 2014!) that provides some details on the v_internal.dc_transaction_ends table.
    Wow, time just flies.

    dc_ tables are internal to Vertica and CAN change without notice. Use them at your own risk!

    https://forum.vertica.com/discussion/219883/is-it-possible-to-know-when-the-data-in-a-particaluar-table-was-updated
    Hope this helps you.
