...

  • There will be a single hive instance, possibly spanning multiple clusters (both DFS and MapReduce).
  • There will be a single hive metastore to keep track of the table/partition locations across different clusters.
  • A table/partition can exist in more than one cluster. A table will have a single primary cluster, and can have multiple
    secondary clusters.
  • Table/partition metadata will be enhanced to support multiple clusters/locations for the table (a possible shape of this
    metadata is sketched after this list).
  • All the data for a table is available in the primary cluster, but only a subset may be available in a secondary cluster.
    However, an object (unpartitioned table or partition) is either fully present or not present at all in a secondary cluster;
    it is not possible to have partial data for a partition in a secondary cluster.
  • The user can only update the table (or its partition) in the primary cluster.
  • The following mapping will be added: Cluster -> JobTracker.
  • By default, the user will not specify any cluster for the session, and the behavior will be as follows (sketched below):
    • The query will be processed in a single cluster, and use the jobtracker for that cluster.
    • If the primary cluster of any output table is different from the query processing cluster, an error is thrown.
      So, a multi-table insert with tables belonging to different primary clusters will always fail.
    • If an input table's primary cluster is different from the query processing cluster, the query will only succeed
      if all the partitions of that input table are also present on the query processing cluster.
    • If an output is specified, the primary cluster for that output will be used.
    • If the output specified is a new table, the output is not used in determining the query processing cluster.
    • If no output is specified (or the output is a new table) and there are multiple inputs for the query, all the input
      tables' primary clusters are tried one by one until a valid cluster is found.
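
The exact metadata changes are not spelled out here; the following is a minimal sketch, assuming the storage metadata of each
object simply gains a primary cluster plus a set of secondary copies, and that the metastore keeps the Cluster -> JobTracker
mapping per cluster. The class and field names are illustrative only, not the actual Hive thrift objects.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class Cluster:
        """Illustrative cluster descriptor: a DFS namenode plus its jobtracker."""
        name: str            # e.g. "C1"
        namenode_uri: str    # e.g. "hdfs://c1-nn:8020"
        jobtracker: str      # the Cluster -> JobTracker mapping, e.g. "c1-jt:8021"

    @dataclass
    class ObjectLocation:
        """Where one object (unpartitioned table or partition) lives.

        An object is either fully present in a cluster or not present at all,
        so a single path per cluster is enough."""
        primary_cluster: str                                                  # the only cluster that accepts updates
        secondary_clusters: Dict[str, str] = field(default_factory=dict)      # cluster name -> path of the full copy

    @dataclass
    class TableMetadata:
        """Illustrative table entry in the single, shared metastore."""
        name: str
        location: ObjectLocation                                              # used directly for unpartitioned tables
        partitions: Dict[str, ObjectLocation] = field(default_factory=dict)   # partition spec -> its locations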

There will be a default cluster for the session (a configuration parameter). Commands will be added to change the cluster.
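
A minimal sketch of this default cluster selection, reusing the illustrative classes above; choose_cluster and its error
cases are hypothetical helpers written to match the rules described here, not an existing Hive API.

    from typing import List

    def is_fully_available(table: TableMetadata, cluster: str) -> bool:
        """True if every object of `table` (each partition, or the whole
        unpartitioned table) is present on `cluster`, as primary or secondary copy."""
        objects = table.partitions.values() if table.partitions else [table.location]
        return all(loc.primary_cluster == cluster or cluster in loc.secondary_clusters
                   for loc in objects)

    def choose_cluster(inputs: List[TableMetadata],
                       existing_outputs: List[TableMetadata]) -> str:
        """Pick the single cluster the query will run on, per the rules above.

        Raises ValueError in the cases where the query is said to fail."""
        # Existing output tables fix the cluster: their primary clusters must agree.
        output_primaries = {t.location.primary_cluster for t in existing_outputs}
        if len(output_primaries) > 1:
            # e.g. a multi-table insert whose targets have different primary clusters
            raise ValueError("outputs belong to different primary clusters")

        if output_primaries:
            candidates = list(output_primaries)
        else:
            # No existing output (new tables are ignored): try each input's
            # primary cluster, in order, until one can serve every input.
            candidates = [t.location.primary_cluster for t in inputs]

        for cluster in candidates:
            if all(is_fully_available(t, cluster) for t in inputs):
                return cluster
        raise ValueError("no single cluster holds all the inputs (and outputs)")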

  • A few examples will illustrate the scenario better:
  • Say T11, T12, T21 and T31 are tables belonging to clusters C1, C1, C2 and C3 respectively, and none of them has a
    secondary cluster.
    • The query 'select .. from T11 .. ' will be processed in C1
    • The query 'select .. from T11 join T12 .. ' will be processed in C1
    • The query 'select .. from T21 .. ' will be processed in C2
    • The query 'select .. from T11 join T21 .. ' will fail
    • 'Insert .. T13 select .. from T11 ..' will be processed in C1, and T13 will be created in C1
    • 'Insert .. T21 select .. from T11 ..' will fail
  • If we change the example slightly:
  • Say T11, T12, T21 and T31 are tables belonging to clusters C1, C1, C2 and C3 respectively.
    T11's secondary cluster is C2 (and all the data for T11 is also present in C2).
    • The query 'select .. from T11 .. ' will be processed in C1
    • The query 'select .. from T11 join T12 .. ' will be processed in C1
    • The query 'select .. from T21 .. ' will be processed in C2
    • The query 'select .. from T11 join T21 .. ' will be processed in C2
    • The query 'select .. from T11 join T31 .. ' will fail
    • 'Insert .. T13 select .. from T11 ..' will be processed in C1, and T13 will be created in C1
    • 'Insert .. T21 select .. from T11 ..' will be processed in C2, and T21 will remain in C2

The same idea can be extended to partitioned tables.
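
For illustration, replaying the second set of examples against the choose_cluster sketch above (same hypothetical helpers)
reproduces the outcomes listed:

    def table(name, primary, secondaries=()):
        """Convenience constructor for an illustrative unpartitioned table."""
        return TableMetadata(name, ObjectLocation(primary, {c: "/" + name for c in secondaries}))

    T11 = table("T11", "C1", secondaries=["C2"])   # full copy of T11 also lives in C2
    T12 = table("T12", "C1")
    T21 = table("T21", "C2")
    T31 = table("T31", "C3")

    assert choose_cluster([T11], []) == "C1"           # select .. from T11
    assert choose_cluster([T11, T21], []) == "C2"      # T11 join T21 runs where both are present
    assert choose_cluster([T11], [T21]) == "C2"        # insert into T21 from T11
    try:
        choose_cluster([T11, T31], [])                 # T11 join T31: no common cluster
        assert False, "expected a failure"
    except ValueError:
        pass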

  • The user can also decide to run in a particular cluster.
    • Use cluster <ClusterName>
  • The system will not make an attempt to choose the cluster for the user, but will only try to figure out whether the query
    can be run in <ClusterName>. If the query can run in that cluster, it will succeed; otherwise, it will fail.
  • Eventually, hive will provide some utilities to copy a table/partition from the primary cluster to the secondary clusters.
    In the first cut, the user needs to do this operation outside hive (one simple way is to distcp the partition from the
    primary to the secondary cluster, and then update the metadata directly via the thrift API); a rough sketch follows.
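
As a rough illustration of that first-cut manual flow: copy the partition directory with distcp, then record the new
secondary location in the metastore. register_secondary_copy is a hypothetical placeholder for the direct thrift-API
update, since the multi-cluster metadata calls are not defined yet; paths and cluster names are made up for the example.

    import subprocess

    def copy_partition_to_secondary(src_path: str, dst_path: str) -> None:
        """Copy one partition directory between clusters with hadoop distcp."""
        subprocess.run(["hadoop", "distcp", src_path, dst_path], check=True)

    def register_secondary_copy(table: str, partition: str, cluster: str, path: str) -> None:
        """Hypothetical placeholder: update the partition's metadata (via the
        metastore thrift API) so that `cluster` is recorded as a secondary
        location holding a full copy at `path`."""
        raise NotImplementedError("depends on the final multi-cluster thrift API")

    # Example: publish T11's ds=2011-01-01 partition from C1 to C2.
    src = "hdfs://c1-nn:8020/user/hive/warehouse/t11/ds=2011-01-01"
    dst = "hdfs://c2-nn:8020/user/hive/warehouse/t11/ds=2011-01-01"
    copy_partition_to_secondary(src, dst)
    register_secondary_copy("T11", "ds=2011-01-01", "C2", dst)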

...