
How does Eon implement K-safety and segmented projections?

Answers

  • MarkH (Employee)

    It's a great question, since the concepts do shift a little in meaning between enterprise and eon mode.

    Segmented projections are a way of having n nodes each "own" 1/nth of the data in a projection. In enterprise mode this ownership is quite literal: each node physically stores its 1/nth of the data on local disk and keeps the corresponding catalog metadata in its local catalog. In eon mode it's a little more abstract. No node truly owns the data files themselves -- they live on the communal storage location for use by any and all nodes -- but the n nodes split responsibility for the catalog metadata representing those data files. An eon segmented projection is sharded into n shards (the shard count is chosen at database creation time and cannot be changed), and this more or less determines how many nodes will collaborate on each query against a segmented projection. An individual node can then subscribe to one or more of these shards.
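
    For a concrete picture, here's a minimal sketch (the table, column, and projection names below are made up for illustration): a segmented projection is declared with a SEGMENTED BY HASH clause, and on an eon database you can list the shards chosen at creation time from the SHARDS system table. Exact system-table columns can vary a little between versions.

        -- Hypothetical table and projection; SEGMENTED BY HASH spreads rows across
        -- the n owners (nodes in enterprise mode, shards in eon mode) by hashing the key.
        CREATE TABLE store_sales (sale_id INT, store_id INT, amount NUMERIC(10,2));

        CREATE PROJECTION store_sales_seg AS
            SELECT sale_id, store_id, amount
            FROM store_sales
            SEGMENTED BY HASH(sale_id) ALL NODES;

        -- Eon mode only: list the shards the database was created with.
        SELECT shard_name, shard_type
        FROM shards;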

    A K-safety of 1 means that Vertica can tolerate any one node failing and still be able to run any arbitrary query. In enterprise mode, this essentially becomes a requirement to keep two copies of the data files on two different nodes, which gives rise to the concept of buddy pair projections. If a node goes down and I lose access to a segment of the data in projection _b0, I can still access that same segment by going to the next node and looking in the _b1 projection.

    In eon mode this changes. We still maintain the fundamental K-safety promise -- K=1 means any single failure still leaves full query availability -- but the eon architecture doesn't lead down the same path to buddy projections and two copies of the data. Instead, eon mode saves only one final copy of the data on the communal storage location, and the key is to have multiple nodes in the database know about the existence and purpose of that communal storage file. In technical terms, that means we need multiple nodes to subscribe to each shard (a shard being, more or less, 1/nth of the storage-related catalog metadata). In a simple example of a 3-node, 3-shard, single-subcluster eon mode database, I might have a node subscription layout like:
    Node01 is subscribed to Shard1, Shard2
    Node02 is subscribed to Shard2, Shard3
    Node03 is subscribed to Shard3, Shard1
    Then if Node01 goes down, I can still find a complete set of storage catalog objects by looking in the shard catalogs of Node02 and Node03. This probably looks similar to an enterprise-style projection layout. We've achieved K=1 through redundant catalog subscriptions rather than redundant copies of the raw data, which is nice since the catalog is much smaller in volume (and it avoids the pain of recovery after these disaster events). Admittedly, predicting the complete subscription layout gets more complex once more advanced eon features are used, like multiple subclusters or secondary subclusters, but the query availability promise remains.
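
    If you want to see a layout like this on a running eon database, the NODE_SUBSCRIPTIONS system table shows it directly. A minimal sketch (exact column names may differ slightly between versions):

        -- One row per (node, shard) subscription; with K-safety 1 every shard
        -- should appear under at least two nodes with an active subscription.
        SELECT shard_name, node_name, subscription_state
        FROM node_subscriptions
        ORDER BY shard_name, node_name;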

    Hope that helps,
    Mark Hayden
