My schema has a few tables that I want to export to S3.
The schema is close to 5 TB in total, and the largest table is close to 3 TB with around 7 billion rows.
What is the fastest way to export this much data to S3?
The best way to export your data to S3 is the S3EXPORT function from the Vertica AWS library. It requires your AWS credentials to be configured in the session, so make sure to set those before you try exporting.
Below is a quick example to get you started:
-- Configure Vertica so it can authenticate with AWS
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_id='ABYHGDGDGDGDGDGD';
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_secret='not-a-real-secret7778hfhfhf';
=> ALTER SESSION SET UDPARAMETER FOR awslib aws_region='us-east-1';

-- Export to an S3 bucket
=> SELECT S3EXPORT( * USING PARAMETERS url='s3://exampleBucket/object')
   OVER(PARTITION BEST)
   FROM your_table;
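One note on speed for the 3 TB table: OVER(PARTITION BEST) already lets Vertica write in parallel across the nodes in your cluster. If a single export is still too slow or too fragile to restart, one option is to split the table into ranges with a WHERE clause and export each range to its own S3 prefix, so a failed chunk can be re-run on its own. A rough sketch (the table name, the date column, and the prefix layout below are illustrative assumptions, not from your schema):

-- Hypothetical: export one slice of the big table per statement,
-- each slice landing under its own S3 prefix
=> SELECT S3EXPORT( * USING PARAMETERS url='s3://exampleBucket/big_table/2023')
   OVER(PARTITION BEST)
   FROM big_table
   WHERE event_date >= '2023-01-01' AND event_date < '2024-01-01';

Pick a range column that matches how the table is sorted or partitioned so each slice scans efficiently, and run the slices concurrently from separate sessions if the cluster has capacity.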
Let me know if this helps.