Kafka connector: Vertica VJDBC error "cannot execute because the driver has not finished reading"
Hello,
I am trying to set up the Kafka connector on a 3-node 7.2.3 system. In my single-node test environments this worked perfectly. On my production system, I get the error shown below.
Commands executed (the first completes successfully, and the kafka_config tables seem correct):
/opt/vertica/packages/kafka/bin/vkconfig scheduler --add --username dbadmin --operator dbadmin --brokers 172.30.0.100:9092,172.30.0.101:9092,172.30.0.102:9092 --password xxxx
/opt/vertica/packages/kafka/bin/vkconfig topic --add --target "xxx.events" --rejection-table "public.kafka_rej" --topic prd-kafka --num-partitions 3 --username dbadmin --parser fjsonparser --password xxxx
Here is the error:
java.sql.SQLNonTransientConnectionException: [Vertica][VJDBC](100102) Statement "SELECT kpartition FROM "kafka_config".kafka_offsets_topk WHERE ktopic = ? AND target_schema ilike ? AND target_table ilike ?" cannot execute because the driver has not finished reading the current open ResultSet. The driver cannot finish reading the current ResultSet because its buffer (8192 bytes) is full. The current ResultSet must be fully iterated through or closed before another statement can execute.
at com.vertica.core.VConnection.ensureNotInLRS(Unknown Source)
at com.vertica.dataengine.VDataEngine.prepareImpl(Unknown Source)
at com.vertica.dataengine.VDataEngine.prepare(Unknown Source)
at com.vertica.dataengine.VDataEngine.prepare(Unknown Source)
at com.vertica.jdbc.common.SPreparedStatement.<init>(Unknown Source)
at com.vertica.jdbc.jdbc4.S4PreparedStatement.<init>(Unknown Source)
at com.vertica.jdbc.VerticaJdbc4PreparedStatementImpl.<init>(Unknown Source)
at com.vertica.jdbc.VJDBCObjectFactory.createPreparedStatement(Unknown Source)
at com.vertica.jdbc.common.SConnection.prepareStatement(Unknown Source)
at com.vertica.solutions.kafka.util.CountedConnection.prepareStatement(CountedConnection.java:69)
at com.vertica.solutions.kafka.cli.TopicConfigurationCLI.addConfiguration(TopicConfigurationCLI.java:154)
at com.vertica.solutions.kafka.cli.TopicConfigurationCLI.run(TopicConfigurationCLI.java:831)
at com.vertica.solutions.kafka.cli.TopicConfigurationCLI.main(TopicConfigurationCLI.java:843)
Caused by: com.vertica.util.LRSException: [Vertica][VJDBC](100102) Statement "SELECT kpartition FROM "kafka_config".kafka_offsets_topk WHERE ktopic = ? AND target_schema ilike ? AND target_table ilike ?" cannot execute because the driver has not finished reading the current open ResultSet. The driver cannot finish reading the current ResultSet because its buffer (8192 bytes) is full. The current ResultSet must be fully iterated through or closed before another statement can execute.
... 13 more
Comments
Hi,
The error message points at the JDBC driver's result-set buffer. There is a connection parameter called ResultBufferSize; it sets the size of the buffer the Vertica JDBC driver uses to temporarily store result sets, and its default value is 8 KB (8192 bytes). Change it to a higher value and retry your Kafka command.
Please visit the URL below for steps on setting a new value for this property:
https://my.vertica.com/docs/7.2.x/HTML/index.htm#Authoring/ConnectingToHPVertica/ClientJDBC/SettingAndGettingConnectionPropertyValues.htm
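For example, here is a minimal sketch of raising the buffer at connect time (the host, database name, and the 1 MB value are placeholders, not values from this thread):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ResultBufferSizeExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "dbadmin");
        props.setProperty("password", "xxxx");
        // ResultBufferSize is in bytes; the driver default is 8192.
        // 1048576 (1 MB) is just an arbitrary larger value.
        props.setProperty("ResultBufferSize", "1048576");
        Connection conn = DriverManager.getConnection(
                "jdbc:vertica://172.30.0.100:5433/mydb", props);
        // ... run statements as usual ...
        conn.close();
    }
}

The same property can also be appended to the JDBC URL, e.g. jdbc:vertica://172.30.0.100:5433/mydb?ResultBufferSize=1048576.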
Hello,
Thank you so much for the reply. Issue resolved.
Drew
Hello,
I'm getting the same error when trying to add a topic:
java.sql.SQLNonTransientConnectionException: [Vertica][VJDBC](100102) Statement "SELECT kpartition FROM "kafka_config".kafka_offsets_topk WHERE ktopic = ? AND target_schema ilike ? AND target_table ilike ?" cannot execute because the driver has not finished reading the current open ResultSet. The driver cannot finish reading the current ResultSet because its buffer (8192 bytes) is full. The current ResultSet must be fully iterated through or closed before another statement can execute.
May I know how you solved it?
How can you set ResultBufferSize when running /opt/vertica/packages/kafka/bin/vkconfig topic --add?
Many Thanks
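A note for anyone hitting this later: I cannot confirm this on 7.2.3, but newer vkconfig releases document common --jdbc-opt and --jdbc-url options for passing JDBC settings through to the connection vkconfig makes (check the output of vkconfig topic --help on your build). If your version has them, a sketch like this might work:

/opt/vertica/packages/kafka/bin/vkconfig topic --add --target "xxx.events" --rejection-table "public.kafka_rej" --topic prd-kafka --num-partitions 3 --username dbadmin --parser fjsonparser --password xxxx --jdbc-opt ResultBufferSize=1048576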
How many columns does your target table have?
Drew
Hi, my target table has 297 columns.
Workaround:
I manually inserted the records into both tables.
I was able to work around the issue by creating a destination table with fewer than 10 columns, then adding the remaining columns (about 50 in my case) after starting the connector.
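As a sketch of that workaround in SQL (the table and column names here are made up, not from this thread):

-- 1. Create the target table with only a handful of columns, so the
--    driver's metadata queries return small result sets.
CREATE TABLE xxx.events (
    event_id   INT,
    event_time TIMESTAMP,
    payload    VARCHAR(65000)
);
-- 2. Register the topic and start the connector against the narrow table.
-- 3. Widen the table afterwards, one column at a time.
ALTER TABLE xxx.events ADD COLUMN user_id INT;
ALTER TABLE xxx.events ADD COLUMN source_host VARCHAR(256);
-- ...repeat for the remaining columns...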