Vertica Spark connector issue
I'm trying to write a Spark Dataset to Vertica with the latest Spark connector, but the BaseRelation can't be created and the write fails with the error below.
Exception in thread "streaming-job-executor-1" java.lang.AbstractMethodError: com.vertica.spark.datasource.DefaultSource.createRelation(Lorg/apache/spark/sql/SQLContext;Lorg/apache/spark/sql/SaveMode;Lscala/collection/immutable/Map;Lorg/apache/spark/sql/Dataset;)Lorg/apache/spark/sql/sources/BaseRelation;
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:442)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
This is the connection code I used:
Dataset<Row> employeeDataFrame = spark.createDataFrame(employeeRecordJavaRDD, Employee.class);
employeeDataFrame.schema();
employeeDataFrame.show();
employeeDataFrame.toDF().write()
        .format("com.vertica.spark.datasource.DefaultSource")
        .option("table", "employee")
        .option("db", "hc")
        .option("user", "dbadmin")
        .option("password", "xx")
        .option("host", "10.3.x.1x")
        .option("hdfs_url", "hdfs://hadoop:9000/user/hadoop/hc")
        .option("web_hdfs_url", "webhdfs://hadoop:50070/user/hadoop/hc")
        .option("dbschema", "hce")
        .option("port", "5433")
        .mode(SaveMode.Append).save();
I also tried employeeDataFrame.write() directly (without toDF()) and got the same error.
Could anyone help point out what I did wrong? Thanks.
Comments
Solved. The current version of the connector only supports Spark 1.6, and I was using Spark 2.0. The AbstractMethodError is the symptom of that mismatch: the createRelation signature a data source must implement changed between Spark 1.6 and 2.0, so a connector compiled against 1.6 doesn't provide the method Spark 2.0 calls.
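If you need to stay on Spark 2.x until an updated connector is available, one possible stopgap (not mentioned in this thread, so treat it as a sketch) is Spark's built-in JDBC data source with the Vertica JDBC driver. The URL, schema, and credentials below are placeholders assembled from the options above:

// Hedged sketch: write the same Dataset through Spark's generic JDBC sink.
// Reuses employeeDataFrame from the question; assumes the Vertica JDBC driver
// (com.vertica.jdbc.Driver) is on the classpath.
employeeDataFrame.write()
        .format("jdbc")
        .option("url", "jdbc:vertica://10.3.x.1x:5433/hc")
        .option("dbtable", "hce.employee")
        .option("user", "dbadmin")
        .option("password", "xx")
        .option("driver", "com.vertica.jdbc.Driver")
        .mode(SaveMode.Append)
        .save();

Note that this inserts over JDBC rather than bulk-loading through HDFS the way the Vertica connector does, so expect lower throughput for large writes.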