hindex - Secondary Index for HBase

The solution is 100% Java, compatible with Apache HBase 0.94.8, and open sourced under the Apache Software License (ASL).

The following capabilities are currently supported:

  • multiple indexes per table,
  • multi-column indexes,
  • indexes on part of a column value,
  • equals and range-condition scans using the index, and
  • bulk loading data into indexed tables (index entries are created during the bulk load).

How it works

HBase Secondary Index is a 100% server-side implementation built on coprocessors, which persist the index data in a separate table. Indexing is region-wise, and a custom load balancer co-locates the index table regions with the corresponding user table regions.


The server reads the index specification passed during table creation and creates the index table. There is one index table per user table, and all index information for that user table goes into the same index table.

Put Operation

When a row is put into the HBase (user) table, the coprocessors prepare and put the index information into the corresponding index table. The index table rowkey is composed as: region start key + index name + indexed column value + user table rowkey.

For example:

  • Table → tab1, column family → cf1
  • Indexes → idx1 on cf1:c1, idx2 on cf1:c2
  • Index table → tab1_idx (user table name with the suffix “_idx”)
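As a rough illustration of that composition (a sketch only, not the actual hindex code; the real implementation also pads the indexed value to the fixed length declared in the index specification), the index rowkey for putting rowkey "row1" with cf1:c1 = "val1" into a region whose start key is "rk0" could be built with org.apache.hadoop.hbase.util.Bytes like this:

// Illustrative only: concatenate region start key + index name +
// indexed column value + user table rowkey.
byte[] regionStartKey = Bytes.toBytes("rk0");
byte[] indexName = Bytes.toBytes("idx1");
byte[] indexedValue = Bytes.toBytes("val1");   // value of cf1:c1 in row1
byte[] userRowKey = Bytes.toBytes("row1");
byte[] indexRowKey = Bytes.add(Bytes.add(regionStartKey, indexName),
                               Bytes.add(indexedValue, userRowKey));
// indexRowKey now reads "rk0" + "idx1" + "val1" + "row1"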


Scan Operation

For a scan on the user table, the coprocessor creates a scanner on the index table, scans the index data, and seeks to the exact rows in the user table. These seeks on the HFiles are based on the rowkeys obtained from the index data, which helps skip blocks where the data is not present; sometimes entire HFiles can be skipped as well.
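To make the seek step concrete (a hypothetical sketch, not the coprocessor's actual code), recovering the user table rowkey from an index table rowkey amounts to stripping the fixed-length prefix and seeking to the remaining bytes in the user table:

// Hypothetical sketch of the seek step: the bytes after the fixed-length
// prefix (region start key + index name + fixed-length value) are the
// user table rowkey that the coprocessor seeks to.
byte[] indexRowKey = Bytes.toBytes("rk0idx1val1row1");
int prefixLength = 3 + 4 + 4;   // "rk0" + "idx1" + "val1" in this toy example
byte[] userRowKey = Bytes.tail(indexRowKey, indexRowKey.length - prefixLength);
// userRowKey == "row1"; the scan then seeks to this row in the user table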


Usage

Clients need to pass an IndexedHTableDescriptor with the index name and columns when creating the table:

IndexedHTableDescriptor htd = new IndexedHTableDescriptor(usertableName);
IndexSpecification iSpec = new IndexSpecification(indexName);
HColumnDescriptor hcd = new HColumnDescriptor(columnFamily);
// Index the given qualifier; values are treated as strings with a
// maximum indexed value length of 10.
iSpec.addIndexColumn(hcd, indexColumnQualifier, ValueType.String, 10);
htd.addFamily(hcd);
htd.addIndex(iSpec);
// The coprocessor creates the index table (user table name + "_idx").
admin.createTable(htd);

No client-side changes are required for Puts and Deletes; the corresponding index operations are handled internally by the coprocessors.

No change is required in the client application's scan code.

There is no need to specify which index to use: the secondary index implementation finds the best index for a Scan by analyzing the filters used in the query, as the example below shows.
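For instance, a standard HBase 0.94 client (HTable, Put, Scan, SingleColumnValueFilter) can write to and query the hypothetical tab1/idx1 table from the earlier example without any index-specific code; a filter on the indexed column cf1:c1 is enough for the coprocessor to route the scan through the index:

HTable table = new HTable(HBaseConfiguration.create(), "tab1");

// A normal put; the region coprocessor writes the matching entry to tab1_idx.
Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("cf1"), Bytes.toBytes("c1"), Bytes.toBytes("val1"));
table.put(put);

// A normal scan with a filter on the indexed column; no index is named,
// the coprocessor selects idx1 by analyzing the filter.
Scan scan = new Scan();
scan.setFilter(new SingleColumnValueFilter(Bytes.toBytes("cf1"), Bytes.toBytes("c1"),
        CompareOp.EQUAL, Bytes.toBytes("val1")));
ResultScanner scanner = table.getScanner(scan);
for (Result result : scanner) {
    System.out.println(Bytes.toString(result.getRow()));
}
scanner.close();
table.close();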

Source

This repository contains the source for secondary index support on Apache HBase 0.94.8.

Building from source and testing

The procedure for building from source is the same as for building HBase itself, and therefore requires:

  • Java 1.6 or later
  • Maven 3.X

Separate test sources (secondaryindex/src/test/java/) are available for running the tests on secondary indexes.

Note

Configure the following properties in hbase-site.xml to use the secondary index.

Property

  • name - hbase.use.secondary.index
  • value - true
  • description - Enable this property to use the secondary index.

Property

  • name - hbase.coprocessor.master.classes
  • value - org.apache.hadoop.hbase.index.coprocessor.master.IndexMasterObserver
  • description - A comma-separated list of org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that are loaded by default on the active HMaster process. For any implemented coprocessor method, the listed classes are called in order. After implementing your own MasterObserver, put it on HBase's classpath and add its fully qualified class name here. org.apache.hadoop.hbase.index.coprocessor.master.IndexMasterObserver defines the coprocessor hooks that support secondary index operations on the master process.

Property

  • name - hbase.coprocessor.region.classes
  • value - org.apache.hadoop.hbase.index.coprocessor.regionserver.IndexRegionObserver
  • description - A comma-separated list of coprocessors that are loaded by default on all tables. For any overridden coprocessor method, these classes are called in order. After implementing your own coprocessor, put it on HBase's classpath and add its fully qualified class name here. A coprocessor can also be loaded on demand via HTableDescriptor. org.apache.hadoop.hbase.index.coprocessor.regionserver.IndexRegionObserver defines the coprocessor hooks that support secondary index operations on the region servers.

Property

  • name - hbase.coprocessor.wal.classes
  • value - org.apache.hadoop.hbase.index.coprocessor.wal.IndexWALObserver
  • description - A comma-separated list of classes that define coprocessor hooks for WAL operations. org.apache.hadoop.hbase.index.coprocessor.wal.IndexWALObserver defines the coprocessor hooks that support secondary index WAL operations.
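Taken together, the corresponding hbase-site.xml fragment (descriptions omitted) looks like this:

<property>
  <name>hbase.use.secondary.index</name>
  <value>true</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.index.coprocessor.master.IndexMasterObserver</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.index.coprocessor.regionserver.IndexRegionObserver</value>
</property>
<property>
  <name>hbase.coprocessor.wal.classes</name>
  <value>org.apache.hadoop.hbase.index.coprocessor.wal.IndexWALObserver</value>
</property>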

Future Work

  • Dynamically add/drop index
  • Integrate Secondary Index Management in the HBase Shell
  • Optimize range scan scenarios
  • HBCK tool support for Secondary index tables
  • WAL Optimizations for Secondary index table entries
  • Make Scan Evaluation Intelligence Pluggable