Is there any way to set preference for INSERT...SELECT to fail if too few columns in SELECT?

KKirkpatrick Registered User

I've been burned by this several times: INSERT...SELECT, and inadvertently leaving one or more columns off my SELECT statement:

 

create table temp1 (c1 varchar(1), c2 varchar(1));

insert into temp1 select 'x';

The above command runs without error, and populates column c2 with NULL. I'd much prefer to have the statement raise an error. I suppose there's a "shame on me" aspect to this; perhaps I should be in the habit of coding all INSERTs as follows:

create table kk2796_temp1 (c1 varchar(1), c2 varchar(1));

insert into kk2796_temp1(c1, c2) select 'x';

I can only speak to my own line of thinking here, but this is how I typically reason about INSERT syntax: 

If I am inserting records into a subset of columns of a table, then I should explicitly indicate which columns should be affected.  However, if I am inserting records into all columns of a table, then I can use a shorthand (omitting column specifications on INSERT).  In other words, when I code using the form of the initial listing, my expectation is that the database engine will implicitly convert it to the second listing.

 

Clearly, this isn't the case (the first INSERT gives no error; the second one fails). Is there a setting I can use to force the two commands to behave identically? Or any way to get the sanity check without listing out all the columns of the table I'm inserting into (which is sometimes hundreds of columns)? I'd even be okay with an alternative syntax, e.g.:

insert into temp1 (*) select 'x';
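For what it's worth, this behavior varies by engine: SQLite, for instance, rejects the short SELECT list outright. A quick sketch using Python's stdlib sqlite3 module (table and column names mirror the example above):

```python
import sqlite3

# In-memory SQLite database mirroring the temp1 example above.
conn = sqlite3.connect(":memory:")
conn.execute("create table temp1 (c1 varchar(1), c2 varchar(1))")

# SQLite rejects an INSERT whose SELECT supplies too few columns,
# which is exactly the sanity check being asked for here.
try:
    conn.execute("insert into temp1 select 'x'")
    short_insert_rejected = False
except sqlite3.OperationalError as err:
    short_insert_rejected = True
    print("rejected:", err)
```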

 

Comments

  • Adrian_Oprea_1 Registered User

    You will need to put a constraint on those columns, such as NOT NULL.
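    A minimal sketch of that suggestion, again using Python's stdlib sqlite3 module as a stand-in (in Vertica, the NOT NULL constraint would likewise fire at insert time):

```python
import sqlite3

# Same temp1 example, but with c2 declared NOT NULL.
conn = sqlite3.connect(":memory:")
conn.execute("create table temp1 (c1 varchar(1), c2 varchar(1) not null)")

# An insert that omits c2 now fails instead of silently storing NULL.
try:
    conn.execute("insert into temp1 (c1) values ('x')")
    constraint_fired = False
except sqlite3.IntegrityError:
    constraint_fired = True
```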

  • KKirkpatrick Registered User

    Yeah, though that wouldn't have saved me in the case where this behavior really caught me off guard (the overlooked tail column there was nullable; it just wasn't supposed to always be null). That said, I suppose this is one more reason to keep in mind the best practice of using NOT NULL in all pertinent cases.

     

    I guess a language is what it is, and this would probably cause backwards-compatibility issues if changed, so I won't push the point any further.

     

    For now, I'm just adhering to a coding standard of always listing out all the columns of each table that I'm inserting into. That said, I'm using TOAD Data Point as my primary vsql IDE - is anyone familiar with TDP aware of any tricks to quickly pull a comma-delimited list of columns for a particular table? (I've posted the same question to the TDP forum and will update here if I get an answer.) Or does anyone have any recommendations for alternative IDEs for vsql development?

  • Adrian_Oprea_1 Registered User

    Yes, we all overlook this type of stuff at some point.

    I started giving this type of work more value when I really became a DBA :)

     

  • Adrian_Oprea_1 Registered User

    Ohh, I forgot:

    To extract the columns of a table as a comma-delimited list, try the GROUP_CONCAT function.

    You just need to install the Vertica extension packages first.

     

    ([email protected]:5433) [dbadmin] *> \d test
    public | test | col1 | int | 8 | | t | t |
    public | test | col2 | int | 8 | | f | f |
    public | test | col3 | int | 8 | | f | f |

    ([email protected]:5433) [dbadmin] *> select group_concat(column_name) over() from columns where table_name='test';
    list
    ------------------
    col1, col2, col3

    I hope this helps!
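    If the extension packages aren't available, a driver-side route works too. A small sketch using Python's stdlib sqlite3 module; the same cursor.description trick should apply to any DB-API driver (e.g. vertica_python), since that attribute is part of the DB-API spec:

```python
import sqlite3

# Build a comma-delimited column list via cursor.description:
# run a zero-row select and read the column names off the cursor.
conn = sqlite3.connect(":memory:")
conn.execute("create table test (col1 int, col2 int, col3 int)")

cur = conn.execute("select * from test where 1=0")
column_list = ", ".join(col[0] for col in cur.description)
print(column_list)  # col1, col2, col3
```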
