
New Features and Changes for HBase in CDH 5

CDH 5.0.x, 5.1.x, and 5.4.x each include a major upgrade to HBase. Each of these upgrades provides new features, as well as changes to keep in mind when upgrading from a previous version.

For new features and changes introduced in older CDH 5 releases, skip to CDH 5.1 HBase Changes or CDH 5.0.x HBase Changes.

CDH 5.4 HBase Changes

CDH 5.4 introduces HBase 1.0, which represents a major upgrade to HBase. This upgrade introduces new features and moves some features which were previously marked as experimental to fully supported status. This overview provides information about the most important features, how to use them, and where to find more information. Cloudera appreciates your feedback about these features.

Highly-Available Read Replicas

CDH 5.4 introduces highly-available read replicas. Using read replicas, clients can request, on a per-read basis, a read result using a new consistency model, timeline consistency, rather than strong consistency. The read request is sent to the RegionServer serving the region, but also to any RegionServers hosting replicas of the region. The client receives the read from the fastest RegionServer to respond, and receives an indication of whether the response was from the primary RegionServer or from a replica. See HBase Read Replicas for more details.
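Following is a minimal sketch of a timeline-consistent read; the row key is illustrative, and table is assumed to be an open Table.

import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

Get get = new Get(Bytes.toBytes("row1"));  // illustrative row key
get.setConsistency(Consistency.TIMELINE);  // allow replicas to serve the read
Result result = table.get(get);            // assumes an open Table named table
if (result.isStale()) {
  // The response came from a replica and may lag the primary region.
}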

MultiWAL Support

CDH 5.4 introduces support for writing multiple write-ahead logs (MultiWAL) on a given RegionServer, allowing you to increase throughput when a region writes to the WAL. See Configuring HBase MultiWAL Support.
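For example, MultiWAL can be enabled for a RegionServer with a setting like the following in hbase-site.xml; see the linked page for full configuration details.

<property>
  <name>hbase.wal.provider</name>
  <value>multiwal</value>
</property>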

Medium-Object (MOB) Storage

CDH 5.4 introduces a mechanism for storing medium objects (MOBs), objects between 100 KB and 10 MB in the default configuration, directly in HBase. With additional configuration, objects up to 50 MB can be stored. Previously, storing such objects directly in HBase could degrade performance because of the write amplification caused by splits and compactions.

MOB storage requires HFile V3.
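Following is a minimal sketch of enabling MOB storage on a column family through the Java API. The column family name is illustrative, and the 102400-byte threshold shown matches the 100 KB default.

import org.apache.hadoop.hbase.HColumnDescriptor;

HColumnDescriptor hcd = new HColumnDescriptor("f"); // illustrative family name
hcd.setMobEnabled(true);      // store large values as MOBs
hcd.setMobThreshold(102400L); // values over 100 KB are treated as MOBs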

doAs Impersonation for the Thrift Gateway

Prior to CDH 5.4, the Thrift gateway could be configured to authenticate to HBase on behalf of the client as a static user. A new mechanism, doAs Impersonation, allows the client to authenticate as any HBase user on a per-call basis for a higher level of security and flexibility.
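Following is a sketch of the hbase-site.xml settings involved on the Thrift gateway; doAs impersonation assumes the Thrift server runs in HTTP transport mode.

<property>
  <name>hbase.regionserver.thrift.http</name>
  <value>true</value>
</property>
<property>
  <name>hbase.thrift.support.proxyuser</name>
  <value>true</value>
</property>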

Namespace Create Authorization

Prior to CDH 5.4, only global admins could create namespaces. Now, a Namespace Create authorization can be assigned to a user, who can then create namespaces.

Authorization to List Namespaces and Tables

Prior to CDH 5.4, authorization checks were not performed on list namespace and list table operations, so you could list the names of any tables or namespaces, regardless of your authorization. In CDH 5.4, you cannot list namespaces or tables you are not authorized to access.

Crunch API Changes for HBase

In CDH 5.4, Apache Crunch adds the following API changes for HBase:
  • HBaseTypes.cells() was added to support serializing HBase Cell objects.
  • The methods of HFileUtils now support PCollection<C extends Cell>, which includes both PCollection<KeyValue> and PCollection<Cell>, in their method signatures.
  • HFileTarget, HBaseTarget, and HBaseSourceTarget each support any subclass of Cell as an output type. HFileSource and HBaseSourceTarget still return KeyValue as the input type for backward-compatibility with existing Crunch pipelines.

ZooKeeper 3.4 Is Required

HBase 1.0 requires ZooKeeper 3.4.

HBase API Changes for CDH 5.4

CDH 5.4.0 introduces HBase 1.0, which includes some major changes to the HBase APIs. Besides the changes listed above, some APIs have been deprecated in favor of new public APIs.
  • The HConnection API is deprecated in favor of Connection.
  • The HConnectionManager API is deprecated in favor of ConnectionFactory.
  • The HTable API is deprecated in favor of Table.
  • The HBaseAdmin API is deprecated in favor of Admin.

HBase 1.0 API Example

Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf)) {
  try (Table table = connection.getTable(TableName.valueOf(tablename))) {
    // Use the table as needed; the returned Table is lightweight.
  }
}

CDH 5.3 HBase Changes

CDH 5.3 includes HBase 0.98.6, which represents a minor upgrade to HBase.

SlabCache Has Been Deprecated

SlabCache, which was marked as deprecated in CDH 5.2, has been removed in CDH 5.3. To configure the BlockCache, see Configuring the HBase BlockCache.

checkAndMutate(RowMutations) API

CDH 5.3 provides checkAndMutate(RowMutations), in addition to existing support for atomic checkAndPut as well as checkAndDelete operations on individual rows (HBASE-11796).
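Following is a minimal sketch of the API. The row, family, and qualifier names are illustrative, and table is assumed to be an open HTableInterface. The Put and Delete are applied atomically only if the checked cell matches the expected value.

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RowMutations;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.util.Bytes;

byte[] row = Bytes.toBytes("row1");
byte[] fam = Bytes.toBytes("f");
RowMutations mutations = new RowMutations(row);
Put put = new Put(row);
put.add(fam, Bytes.toBytes("state"), Bytes.toBytes("done"));
mutations.add(put);
Delete delete = new Delete(row);
delete.deleteColumn(fam, Bytes.toBytes("pending"));
mutations.add(delete);
// Apply both mutations atomically, but only if f:state equals "ready".
boolean applied = table.checkAndMutate(row, fam, Bytes.toBytes("state"),
    CompareOp.EQUAL, Bytes.toBytes("ready"), mutations);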

CDH 5.2 HBase Changes

CDH 5.2 introduces HBase 0.98.6, which represents a minor upgrade to HBase. This upgrade introduces new features and moves some features which were previously marked as experimental to fully supported status. This overview provides information about the most important features, how to use them, and where to find more information. Cloudera appreciates your feedback about these features.

JAVA_HOME must be set in your environment.

HBase now requires JAVA_HOME to be set in your environment. If it is not set, HBase will fail to start and an error will be logged. If you use Cloudera Manager, this is set automatically. If you use CDH without Cloudera Manager, JAVA_HOME should be set up as part of the overall installation. See Java Development Kit Installation for instructions on setting JAVA_HOME, as well as other JDK-specific instructions.

The default value for hbase.hstore.flusher.count has increased from 1 to 2.

The default value for hbase.hstore.flusher.count has been increased from one thread to two. This change can improve performance when writing to HBase under some workloads. However, for high-I/O workloads, two flusher threads can create additional contention when writing to HDFS. If, after upgrading to CDH 5.2, you see an increase in flush times or performance degradation, lowering this value to 1 is recommended. Use the RegionServer's advanced configuration snippet for hbase-site.xml if you use Cloudera Manager, or edit the file directly otherwise.
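For example, the following hbase-site.xml snippet restores the previous value:

<property>
  <name>hbase.hstore.flusher.count</name>
  <value>1</value>
</property>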

The default value for hbase.hregion.memstore.block.multiplier has increased from 2 to 4.

The default value for hbase.hregion.memstore.block.multiplier has increased from 2 to 4 to improve both throughput and latency. If you experience performance degradation due to this change, set the value back to 2, using the RegionServer's advanced configuration snippet for hbase-site.xml if you use Cloudera Manager, or by editing the file directly otherwise.

SlabCache is deprecated, and BucketCache is now the default block cache.

CDH 5.1 provided full support for the BucketCache block cache. CDH 5.2 deprecates usage of SlabCache in favor of BucketCache. To configure BucketCache, see BucketCache Block Cache.

Changed Syntax of user_permissions Shell Command

The pattern-matching behavior for the user_permissions HBase Shell command has changed. Previously, either of the following two commands would return permissions of all known users in HBase:
  • hbase> user_permissions '*'
  • hbase> user_permissions '.*'

The first variant is no longer supported. Only the second variant is supported, and it also accepts other Java regular expressions.

New Properties for IPC Configuration

If the Hadoop configuration is read after the HBase configuration, Hadoop's settings can override HBase's settings when the names of the settings are the same. To avoid this risk, HBase has renamed the following settings (by prepending hbase.) so that you can set them independently of your Hadoop settings. If you do not use the HBase-specific variants, the Hadoop settings are used. If you have not experienced issues with your configuration, there is no need to change it.
Hadoop Configuration Property        New HBase Configuration Property
ipc.server.listen.queue.size         hbase.ipc.server.listen.queue.size
ipc.server.max.callqueue.size        hbase.ipc.server.max.callqueue.size
ipc.server.max.callqueue.length      hbase.ipc.server.max.callqueue.length
ipc.server.read.threadpool.size      hbase.ipc.server.read.threadpool.size
ipc.server.tcpkeepalive              hbase.ipc.server.tcpkeepalive
ipc.server.tcpnodelay                hbase.ipc.server.tcpnodelay
ipc.client.call.purge.timeout        hbase.ipc.client.call.purge.timeout
ipc.client.connection.maxidletime    hbase.ipc.client.connection.maxidletime
ipc.client.idlethreshold             hbase.ipc.client.idlethreshold
ipc.client.kill.max                  hbase.ipc.client.kill.max

Snapshot Manifest Configuration

Snapshot manifests were previously a feature included in HBase in CDH 5 but not in Apache HBase. They are now included in Apache HBase 0.98.6. To use snapshot manifests, you now need to set hbase.snapshot.format.version to 2 in hbase-site.xml. This is the default for HBase in CDH 5.2, but the default for Apache HBase 0.98.6 is 1. To edit the configuration, use an Advanced Configuration Snippet if you use Cloudera Manager, or edit the file directly otherwise. The new snapshot code can read both version 1 and 2. However, if you use version 2, you will not be able to read these snapshots on HBase versions prior to 0.98.

Not using manifests (setting hbase.snapshot.format.version to 1) can cause excess load on the NameNode and impact performance.
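For example, the following hbase-site.xml snippet sets the snapshot format explicitly:

<property>
  <name>hbase.snapshot.format.version</name>
  <value>2</value>
</property>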

Tags

Tags, which allow metadata to be stored in HFiles alongside cell data, are a feature of HFile version 3 and are needed for per-cell access controls and visibility labels. Tags were previously considered an experimental feature but are now fully supported.

Per-Cell Access Controls

Per-cell access controls were introduced as an experimental feature in CDH 5.1 and are fully supported in CDH 5.2. You must use HFile version 3 to use per-cell access controls. For more information about access controls, see Per-Cell Access Controls.
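Following is a minimal sketch of attaching a per-cell ACL to a Put; the user name and column names are illustrative.

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.security.access.Permission;
import org.apache.hadoop.hbase.util.Bytes;

Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("value"));
// Grant the (illustrative) user jsmith read access to the cells in this Put.
put.setACL("jsmith", new Permission(Permission.Action.READ));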

Experimental Features

  Warning: These features are still considered experimental. Experimental features are not supported and Cloudera does not recommend using them in production environments or with important data.

Visibility Labels

You can now specify a list of visibility labels, such as CONFIDENTIAL, TOPSECRET, or PUBLIC, at the cell level. You can associate users with these labels to enforce visibility of HBase data. These labels can be grouped into complex expressions using logical operators &, |, and ! (AND, OR, NOT). A given user is associated with a set of visibility labels, and the policy for associating the labels is pluggable. A coprocessor, org.apache.hadoop.hbase.security.visibility.DefaultScanLabelGenerator, checks for visibility labels on cells that would be returned by a Get or Scan and drops the cells that a user is not authorized to see, before returning the results. The same coprocessor saves visibility labels as tags, in the HFiles alongside the cell data, when a Put operation includes visibility labels. You can specify custom implementations of ScanLabelGenerator by setting the property hbase.regionserver.scan.visibility.label.generator.class to a comma-separated list of classes in hbase-site.xml. To edit the configuration, use an Advanced Configuration Snippet if you use Cloudera Manager, or edit the file directly otherwise.

No labels are configured by default. You can add a label to the system using either the VisibilityClient#addLabels() API or the add_label shell command. Similar APIs and shell commands are provided for deleting labels and assigning them to users. Only a user with superuser access (the hbase.superuser access level) can perform these operations.

To assign a visibility label to a cell, you can label the cell using the API method Mutation#setCellVisibility(new CellVisibility(<labelExp>));. An API is provided for managing visibility labels, and you can also perform many of the operations using HBase Shell.
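Following is a minimal sketch of writing a cell with a visibility expression and reading it back with matching authorizations. The label names follow the examples above, the column names are illustrative, and table is assumed to be an open table.

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.security.visibility.Authorizations;
import org.apache.hadoop.hbase.security.visibility.CellVisibility;
import org.apache.hadoop.hbase.util.Bytes;

Put put = new Put(Bytes.toBytes("row1"));
put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("value"));
put.setCellVisibility(new CellVisibility("CONFIDENTIAL & !PUBLIC"));
table.put(put);

// Only readers whose authorizations satisfy the expression see the cell.
Get get = new Get(Bytes.toBytes("row1"));
get.setAuthorizations(new Authorizations("CONFIDENTIAL"));
Result result = table.get(get);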

Previously, visibility labels could not contain the symbols &, |, !, ( and ), but this is no longer the case.

For more information about visibility labels, see the Visibility Labels section of the Apache HBase Reference Guide.

If you use visibility labels along with access controls, you must ensure that the Access Controller is loaded before the Visibility Controller in the list of coprocessors. This is the default configuration. See HBASE-11275.

Visibility labels are an experimental feature introduced in CDH 5.1, and still experimental in CDH 5.2.

Transparent Server-Side Encryption

Transparent server-side encryption can now be enabled for both HFiles and write-ahead logs (WALs), to protect their contents at rest. To configure transparent encryption, first create an encryption key, then configure the appropriate settings in hbase-site.xml. To edit the configuration, use an Advanced Configuration Snippet if you use Cloudera Manager, or edit the file directly otherwise. See the Transparent Encryption section in the Apache HBase Reference Guide for more information.
Transparent server-side encryption is an experimental feature introduced in CDH 5.1, and still experimental in CDH 5.2.

Stripe Compaction

Stripe compaction is a compaction scheme that segregates the data inside a region by row key, creating "stripes" of data which are visible within the region but transparent to normal operations. This striping improves read performance in common scenarios and greatly reduces variability by avoiding large or inefficient compactions.

Configuration guidelines and more information are available at Stripe Compaction.

To configure stripe compaction for a single table from within the HBase shell, use the following syntax.
alter <table>, CONFIGURATION => {<setting> => <value>}
        Example: alter 'orders', CONFIGURATION => {'hbase.store.stripe.fixed.count' => 10}
To configure stripe compaction for a column family from within the HBase shell, use the following syntax.
alter <table>, {NAME => <column family>, CONFIGURATION => {<setting> => <value>}}
        Example: alter 'logs', {NAME => 'blobs', CONFIGURATION => {'hbase.store.stripe.fixed.count' => 10}}

Stripe compaction is an experimental feature in CDH 5.1, and still experimental in CDH 5.2.

CDH 5.1 HBase Changes

CDH 5.1 introduces HBase 0.98, which represents a major upgrade to HBase. This upgrade introduces several new features, including a section of features which are considered experimental and should not be used in a production environment. This overview provides information about the most important features, how to use them, and where to find more information. Cloudera appreciates your feedback about these features.

In addition to HBase 0.98, Cloudera has pulled in changes from HBASE-10883, HBASE-10964, HBASE-10823, HBASE-10916, and HBASE-11275. Implications of these changes are detailed below and in the Release Notes.

BucketCache Block Cache

A new offheap BlockCache implementation, BucketCache, was introduced as an experimental feature in CDH 5 Beta 1, and is now fully supported in CDH 5.1. BucketCache can be used in either of the following two configurations:
  • As a CombinedBlockCache with both onheap and offheap caches.
  • As an L2 cache for the default onheap LruBlockCache.

BucketCache requires less garbage collection than SlabCache, which is the other offheap cache implementation in HBase. It also has many optional configuration settings for fine-tuning. All available settings are documented in the API documentation for CombinedBlockCache. Following is a simple example configuration.

  1. First, edit hbase-env.sh and set -XX:MaxDirectMemorySize to the total size of the desired onheap plus offheap, in this case, 5 GB (but expressed as 5G). To edit the configuration, use an Advanced Configuration Snippet if you use Cloudera Manager, or edit the file directly otherwise.
    -XX:MaxDirectMemorySize=5G
  2. Next, add the following configuration to hbase-site.xml. To edit the configuration, use an Advanced Configuration Snippet if you use Cloudera Manager, or edit the file directly otherwise. This configuration uses 80% of the -XX:MaxDirectMemorySize (4 GB) for offheap, and the remainder (1 GB) for onheap.
    <property>
      <name>hbase.bucketcache.ioengine</name>
      <value>offheap</value>
    </property>
    <property>
      <name>hbase.bucketcache.percentage.in.combinedcache</name>
      <value>0.8</value>
    </property>
    <property>
      <name>hbase.bucketcache.size</name>
      <value>5120</value>
    </property>
  3. Restart or perform a rolling restart of your cluster for the configuration to take effect.

Access Control for EXEC Permissions

A new access control level has been added to check whether a given user has EXEC permission. This can be specified at the level of the cluster, table, row, or cell.

To use EXEC permissions, perform the following procedure.
  1. Install the AccessController coprocessor, either as a system coprocessor or on a table as a table coprocessor.
  2. Set the hbase.security.exec.permission.checks configuration setting in hbase-site.xml to true. To edit the configuration, use an Advanced Configuration Snippet if you use Cloudera Manager, or edit the file directly otherwise.
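Once enabled, EXEC permission can be granted like any other permission from the HBase Shell; the user and table names here are illustrative.
hbase> grant 'jsmith', 'X', 'sales_endpoint_table'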

For more information on setting and revoking security permissions, see the Access Control section of the Apache HBase Reference Guide.

Reverse Scan API

A reverse scan API has been introduced. This allows you to scan a table in reverse. Previously, if you wanted to be able to access your data in either direction, you needed to store the data in two separate tables, each ordered differently. This feature was implemented in HBASE-4811.

To use the reverse scan feature, use the new Scan.setReversed(boolean reversed) API. If you specify a startRow and stopRow, to scan in reverse, the startRow needs to be lexicographically after the stopRow. See the Scan API documentation for more information.
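Following is a minimal sketch; the row keys are illustrative, and note that the start row sorts after the stop row.

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

Scan scan = new Scan();
scan.setReversed(true);
scan.setStartRow(Bytes.toBytes("row900")); // lexicographically later key
scan.setStopRow(Bytes.toBytes("row100"));  // the scan proceeds toward this key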

MapReduce Over Snapshots

You can now run a MapReduce job over a snapshot from HBase, rather than being limited to live data. This provides the ability to separate your client-side workload from your live cluster if you need to run resource-intensive MapReduce jobs and can tolerate using potentially stale data. You can either run the MapReduce job on the snapshot within HBase, or export the snapshot and run the MapReduce job against the exported file.

Running a MapReduce job on an exported file outside of the scope of HBase relies on the permissions of the underlying filesystem and server, and bypasses ACLs, visibility labels, and encryption that may otherwise be provided by your HBase cluster.

A new API, TableSnapshotInputFormat, is provided. For more information, see TableSnapshotInputFormat.
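Following is a minimal sketch of configuring a job over a snapshot using TableMapReduceUtil. The snapshot name, mapper class, output classes, and restore directory are illustrative, and conf is assumed to be an existing Hadoop Configuration.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

Job job = Job.getInstance(conf);
TableMapReduceUtil.initTableSnapshotMapperJob(
    "mySnapshot",          // name of an existing snapshot (illustrative)
    new Scan(),            // scan to apply over the snapshot data
    MyMapper.class,        // a TableMapper implementation (illustrative)
    Text.class,            // output key class
    LongWritable.class,    // output value class
    job,
    true,                  // add HBase dependency jars to the job
    new Path("/tmp/snapshot-restore")); // scratch directory for restored files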

MapReduce over snapshots was introduced in CDH 5.0.

Stateless Streaming Scanner over REST

A new stateless streaming scanner is available over the REST API. Using this scanner, clients do not need to restart a scan if the REST server experiences a transient failure. All query parameters are specified during the REST request. Query parameters include startrow, endrow, columns, starttime, endtime, maxversions, batchtime, and limit. Following are a few examples of using the stateless streaming scanner.

Scan the entire table, return the results in JSON.
curl -H "Accept: application/json" https://localhost:8080/ExampleScanner/*
Scan the entire table, return the results in XML.
curl -H "Content-Type: text/xml" https://localhost:8080/ExampleScanner/*
Scan only the first row.
curl -H "Content-Type: text/xml" \
https://localhost:8080/ExampleScanner/*?limit=1
Scan only specific columns.
curl -H "Content-Type: text/xml" \
https://localhost:8080/ExampleScanner/*?columns=a:1,b:1
Scan for rows between starttime and endtime.
curl -H "Content-Type: text/xml" \
https://localhost:8080/ExampleScanner/*?starttime=1389900769772\
&endtime=1389900800000
Scan for a given row prefix.
curl -H "Content-Type: text/xml" https://localhost:8080/ExampleScanner/test*

For full details about the stateless streaming scanner, see the API documentation for this feature.

Delete Methods Now Use Constructor Timestamps

The delete*() methods of the Delete class of the HBase Client API previously ignored the constructor's timestamp, and used the value of HConstants.LATEST_TIMESTAMP. This behavior was inconsistent with the behavior of the add() methods of the Put class. The delete*() methods now use the timestamp from the constructor, creating consistent behavior across mutation classes. See HBASE-10964.
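Following is a minimal sketch; the row, timestamp, and column names are illustrative.

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.util.Bytes;

long ts = 1389900769772L; // illustrative timestamp
Delete delete = new Delete(Bytes.toBytes("row1"), ts);
// Deletes versions of f:q at or before ts, rather than at
// HConstants.LATEST_TIMESTAMP as in previous releases.
delete.deleteColumns(Bytes.toBytes("f"), Bytes.toBytes("q"));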

Experimental Features

  Warning: These features are still considered experimental. Experimental features are not supported and Cloudera does not recommend using them in production environments or with important data.

Visibility Labels

You can now specify a list of visibility labels, such as CONFIDENTIAL, TOPSECRET, or PUBLIC, at the cell level. You can associate users with these labels to enforce visibility of HBase data. These labels can be grouped into complex expressions using logical operators &, |, and ! (AND, OR, NOT). A given user is associated with a set of visibility labels, and the policy for associating the labels is pluggable. A coprocessor, org.apache.hadoop.hbase.security.visibility.DefaultScanLabelGenerator, checks for visibility labels on cells that would be returned by a Get or Scan and drops the cells that a user is not authorized to see, before returning the results. The same coprocessor saves visibility labels as tags, in the HFiles alongside the cell data, when a Put operation includes visibility labels. You can specify custom implementations of ScanLabelGenerator by setting the property hbase.regionserver.scan.visibility.label.generator.class to a comma-separated list of classes.

No labels are configured by default. You can add a label to the system using either the VisibilityClient#addLabels() API or the add_label shell command. Similar APIs and shell commands are provided for deleting labels and assigning them to users. Only a user with superuser access (the hbase.superuser access level) can perform these operations.

To assign a visibility label to a cell, you can label the cell using the API method Mutation#setCellVisibility(new CellVisibility(<labelExp>));.

Visibility labels and request authorizations cannot contain the symbols &, |, !, ( and ) because they are reserved for constructing visibility expressions. See HBASE-10883.

For more information about visibility labels, see the Visibility Labels section of the Apache HBase Reference Guide.

If you use visibility labels along with access controls, you must ensure that the Access Controller is loaded before the Visibility Controller in the list of coprocessors. This is the default configuration. See HBASE-11275.

To use per-cell access controls or visibility labels, you must use HFile version 3. To enable HFile version 3, add the following to hbase-site.xml, using an Advanced Configuration Snippet if you use Cloudera Manager, or editing the file directly if your deployment is unmanaged. Changes take effect after the next major compaction.
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>

Visibility labels are an experimental feature introduced in CDH 5.1.

Per-Cell Access Controls

You can now specify access control levels at the per-cell level, as well as at the level of the cluster, table, or row.
A new parent class has been provided, which encompasses Get, Scan, and Query. This change also moves the getFilter and setFilter methods of Get and Scan to the common parent class. Client code may need to be recompiled to take advantage of per-cell ACLs. See the Access Control section of the Apache HBase Reference Guide for more information.
ACLs for cells with timestamps in the future are not considered when authorizing pending mutation operations. See HBASE-10823.
If you use visibility labels along with access controls, you must ensure that the Access Controller is loaded before the Visibility Controller in the list of coprocessors. This is the default configuration.
To use per-cell access controls or visibility labels, you must use HFile version 3. To enable HFile version 3, add the following to hbase-site.xml, using an Advanced Configuration Snippet if you use Cloudera Manager, or editing the file directly if your deployment is unmanaged. Changes take effect after the next major compaction.
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
Per-cell access controls are an experimental feature introduced in CDH 5.1.

Transparent Server-Side Encryption

Transparent server-side encryption can now be enabled for both HFiles and write-ahead logs (WALs), to protect their contents at rest. To configure transparent encryption, first create an encryption key, then configure the appropriate settings in hbase-site.xml. See the Transparent Encryption section in the Apache HBase Reference Guide for more information.
Transparent server-side encryption is an experimental feature introduced in CDH 5.1.

Stripe Compaction

Stripe compaction is a compaction scheme that segregates the data inside a region by row key, creating "stripes" of data which are visible within the region but transparent to normal operations. This striping improves read performance in common scenarios and greatly reduces variability by avoiding large or inefficient compactions.

Configuration guidelines and more information are available at Stripe Compaction.

To configure stripe compaction for a single table from within the HBase shell, use the following syntax.
alter <table>, CONFIGURATION => {<setting> => <value>}
        Example: alter 'orders', CONFIGURATION => {'hbase.store.stripe.fixed.count' => 10}
To configure stripe compaction for a column family from within the HBase shell, use the following syntax.
alter <table>, {NAME => <column family>, CONFIGURATION => {<setting> => <value>}}
        Example: alter 'logs', {NAME => 'blobs', CONFIGURATION => {'hbase.store.stripe.fixed.count' => 10}}

Stripe compaction is an experimental feature in CDH 5.1.

Distributed Log Replay

After a RegionServer fails, its failed region is assigned to another RegionServer, which is marked as "recovering" in ZooKeeper. A SplitLogWorker directly replays edits from the WAL of the failed RegionServer to the region at its new location. When a region is in "recovering" state, it can accept writes, but not reads (including Append and Increment), region splits, or merges. Distributed Log Replay extends the distributed log splitting framework. It works by directly replaying WAL edits to another RegionServer instead of creating recovered.edits files.
Distributed log replay provides the following advantages over using the current distributed log splitting functionality on its own.
  • It eliminates the overhead of writing and reading a large number of recovered.edits files. It is not unusual for thousands of recovered.edits files to be created and written concurrently during a RegionServer recovery. Many small random writes can degrade overall system performance.
  • It allows writes even when a region is in recovering state. It only takes seconds for a recovering region to accept writes again.
To enable distributed log replay, set hbase.master.distributed.log.replay to true. You must also enable HFile version 3. Distributed log replay is unsafe for rolling upgrades.

Distributed log replay is an experimental feature in CDH 5.1.

CDH 5.0.x HBase Changes

HBase in CDH 5.0.x is based on the Apache HBase 0.96 release. When upgrading to CDH 5.0.x, keep the following in mind.

Wire Compatibility

HBase in CDH 5.0.x (HBase 0.96) is not wire compatible with CDH 4 (based on 0.92 and 0.94 releases). Consequently, rolling upgrades from CDH 4 to CDH 5 are not possible because existing CDH 4 HBase clients cannot make requests to CDH 5 servers and CDH 5 HBase clients cannot make requests to CDH 4 servers. Clients of the Thrift and REST proxy servers, however, retain wire compatibility between CDH 4 and CDH 5.

Upgrade is Not Reversible

The upgrade from CDH 4 HBase to CDH 5 HBase is irreversible and requires HBase to be shut down completely. Executing the upgrade script reorganizes existing HBase data stored on HDFS into new directory structures, converts HBase 0.90 HFile v1 files to the improved and optimized HBase 0.96 HFile v2 file format, and rewrites the hbase.version file. This upgrade also removes transient data stored in ZooKeeper during the conversion to the new data format.

These changes were made to reduce the impact in future major upgrades. Previously HBase used brittle custom data formats and this move shifts HBase's RPC and persistent data to a more evolvable Protocol Buffer data format.

API Changes

The HBase User API (Get, Put, Result, Scanner, and so on; see the Apache HBase API documentation) has evolved, and attempts have been made to ensure that HBase clients remain source-code compatible, so client code should recompile without modification. This cannot be guaranteed, however, because with the conversion to Protocol Buffers (protobufs), some relatively obscure APIs have been removed. Rudimentary efforts have also been made to preserve recompile compatibility with advanced APIs such as Filters and Coprocessors. These advanced APIs are still evolving, and the guarantees for API compatibility are weaker here.

For information about changes to custom filters, see Custom Filters.

As of 0.96, the User API has been marked stable, and every attempt will be made to maintain compatibility with it in future versions. A version of the javadoc that contains only the User API can be found here.

HBase Metrics Changes

HBase provides a metrics framework based on JMX beans. Between HBase 0.94 and 0.96, the metrics framework underwent many changes: some beans were added and removed, some metrics were moved from one bean to another, and some metrics were renamed or removed. A CSV spreadsheet that maps the old metrics to the new ones is available for download.

Custom Filters

If you used custom filters written for HBase 0.94, you need to recompile those filters for HBase 0.96. The custom filter must be altered to fit the newer interface, which uses protocol buffers. Specifically, two new methods, toByteArray(…) and parseFrom(…), which are detailed in the Filter API documentation, should be used instead of the old methods write(…) and readFields(…), so that protocol buffer serialization is used. To see what changes were required to port one of HBase's own custom filters, see the Git commit that represented porting the SingleColumnValueFilter filter.
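Following is a minimal sketch of the new hooks. The filter class, its field, and its byte-level format are hypothetical; a real filter would typically emit a protocol buffer message from toByteArray().

import org.apache.hadoop.hbase.exceptions.DeserializationException;
import org.apache.hadoop.hbase.filter.FilterBase;

// Hypothetical filter used only to illustrate the serialization hooks.
public class ExamplePrefixFilter extends FilterBase {
  private final byte[] prefix;

  public ExamplePrefixFilter(byte[] prefix) {
    this.prefix = prefix;
  }

  // Replaces write(...): serialize this filter's state for the wire.
  @Override
  public byte[] toByteArray() {
    return prefix.clone();
  }

  // Replaces readFields(...): reconstruct the filter on the server side.
  public static ExamplePrefixFilter parseFrom(byte[] bytes)
      throws DeserializationException {
    return new ExamplePrefixFilter(bytes);
  }
}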

Checksums

In CDH 4, HBase relied on HDFS checksums to protect against data corruption. In CDH 5, HBase checksums are turned on by default. With this configuration, HBase reads data and then verifies the checksums itself, and checksum verification inside HDFS is switched off. If the HBase checksum verification fails, the HDFS checksums are used instead to verify the data being read from storage. After you turn on HBase checksums, you cannot roll back to an earlier HBase version.

You should see a modest performance gain after setting hbase.regionserver.checksum.verify to true for data that is not already present in the RegionServer's block cache.

To enable or disable checksums, modify the following configuration properties in hbase-site.xml. To edit the configuration, use an Advanced Configuration Snippet if you use Cloudera Manager, or edit the file directly otherwise.

<property>
  <name>hbase.regionserver.checksum.verify</name>
  <value>true</value>
  <description>
      If set to true, HBase will read data and then verify checksums for
      hfile blocks. Checksum verification inside HDFS will be switched off.
      If the hbase-checksum verification fails, then it will switch back to
      using HDFS checksums.
  </description>
</property>
The default value for the hbase.hstore.checksum.algorithm property has also changed to CRC32. Previously, Cloudera advised setting it to NULL due to performance issues which are no longer a problem.
<property>
   <name>hbase.hstore.checksum.algorithm</name>
   <value>CRC32</value>
   <description>
     Name of an algorithm that is used to compute checksums. Possible values
     are NULL, CRC32, CRC32C.
   </description>
 </property>