Spark Vertica connector architecture problem
I noticed something that looks wrong in how the connector works.
Just mentioning it here.
I have a large table on Vertica, segmented by key across all nodes.
As your documentation says, the Spark RDD queries data from Vertica according to how the table is segmented, and the Spark partitions are built on top of that segmentation.
But when I print each partition's data after querying the segmented table, I always get one partition with all the data from a random server, while the rest contain 0 rows!
It looks like the Vertica connector merges all the data onto one random node instead of leaving it distributed across the servers as the documentation says.
It's really important for me to know whether I'm doing something wrong.
Waiting for a response.