
Procedure for Rolling Back a CDH 4-to-CDH 5 Upgrade

You can roll back to CDH 4 after upgrading to CDH 5 only if the HDFS upgrade has not been finalized. The rollback restores your CDH cluster to the state it was in before the upgrade, including Kerberos and TLS/SSL configurations. Data created after the upgrade is lost.

  Important: When performing these rollback steps, use backups taken before you started the upgrade. For steps where you need to restore the contents of a directory, clear the contents of the directory before copying the backed-up files to the directory. If you fail to do this, artifacts from the original upgrade can cause problems if you attempt the upgrade again after the rollback.

Rollback Steps

Determining the Last Active NameNode

If your cluster has high availability for HDFS enabled, determine which NameNode host was last active:
  1. In Cloudera Manager, select Clusters > HDFS > Instances.

    A list of role types and hosts displays.

  2. Look for the Role Type NameNode (Active).

    The NameNode that is not active is called the standby NameNode and displays in the list of Role Types as NameNode (Standby).
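
If you prefer the command line, you can also query the NameNode states directly with the haadmin tool while the cluster is still running. This is an optional alternative to the steps above; the NameNode IDs shown here (nn1 and nn2) are examples, and the actual IDs for your cluster are listed in the dfs.ha.namenodes.<nameservice> property of the HDFS configuration:

  # Prints "active" or "standby" for each NameNode ID.
  sudo -u hdfs hdfs haadmin -getServiceState nn1
  sudo -u hdfs hdfs haadmin -getServiceState nn2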

Downgrading the Software

  1. Stop the cluster:
    1. On the Home > Status tab, click to the right of the cluster name and select Stop.
    2. Click Stop in the confirmation screen. The Command Details window shows the progress of stopping services.

      When All services successfully stopped appears, the task is complete and you can close the Command Details window.

  2. Downgrade the JDK, if necessary.

    When you upgraded your cluster to CDH 5, you were also required to upgrade the JDK. If you are rolling back your cluster to Cloudera Manager version 4.6.x or lower and CDH version 4.3.x or lower, Cloudera recommends that you downgrade your JDK on all hosts to the version deployed before the upgrade, or to JDK 1.6. You can also choose to run your cluster using the version of the JDK you installed during the upgrade. To verify that the JDK you choose is supported by Cloudera Manager and CDH, see CDH 5 and Cloudera Manager 5 Requirements and Supported Versions.

    You can have two versions of a JDK on a host machine, but you must make sure the correct version is used by the software. See: Java Development Kit Installation.
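
    On RHEL-compatible systems, for example, you can list the registered JDKs and switch the system default with the alternatives tool. This is only an illustration; depending on how the JDK was installed, you may instead need to set JAVA_HOME explicitly, as described in Java Development Kit Installation:

    # Show the registered java binaries and interactively select the one to use.
    sudo alternatives --config java
    # Confirm which version is now active.
    java -version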

  3. Depending on whether your cluster was installed using parcels or packages, do one of the following:
    • Parcels
      1. Log in to the Cloudera Manager Admin Console.
      2. Select Hosts > Parcels.

        A list of parcels displays.

      3. Locate the CDH 4 parcel and click Activate. (This automatically deactivates the CDH 5 parcel.) See Activating a Parcel for more information. If the parcel is not available, use the Download button to download the parcel.
      4. If you include any additional components in your cluster, such as Search or Impala, click Activate for those parcels.
        Important: Do not start any services. If any services are accidentally started, stop your cluster again before proceeding.

    • Packages
      1. Log in as a privileged user to all hosts in your cluster.
      2. Run the following command to uninstall CDH 5:
        RHEL:
          $ sudo yum remove bigtop-jsvc bigtop-utils bigtop-tomcat hue-common sqoop2-client hbase-solr-doc solr-doc
        SLES:
          $ sudo zypper remove bigtop-jsvc bigtop-utils bigtop-tomcat hue-common sqoop2-client hbase-solr-doc solr-doc
        Ubuntu or Debian:
          $ sudo apt-get purge bigtop-jsvc bigtop-utils bigtop-tomcat hue-common sqoop2-client hbase-solr-doc solr-doc
      3. Remove the CDH 5 repository files from the system repository directory. For example, on a RHEL or similar system, remove all files in /etc/yum.repos.d that have cloudera as part of the name. (Make sure that you have backed up these files, as instructed in Backing Up CDH 4 and Cloudera Manager Repository Files.)
      4. Restore the CDH 4 repository files that you previously backed up to the repository directory.
      5. Re-install the CDH 4 packages using the same installation path you used for the initial installation; see Installing Cloudera Manager and CDH. Repeat only the installation steps and not the configuration steps. (The configurations are already stored in Cloudera Manager.) Make sure that you include any additional components used in your cluster, such as MapReduce 1, YARN, Spark, Pig, Sqoop, or Impala.
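
      As an illustration of steps 3 and 4 above on a RHEL host, restoring the repository files might look like the following sketch. The backup directory (/root/cdh4-repo-backup) is hypothetical; adjust the paths to match your own backup location:

        # Remove the CDH 5 repository definitions.
        sudo rm -f /etc/yum.repos.d/*cloudera*
        # Restore the backed-up CDH 4 repository files.
        sudo cp /root/cdh4-repo-backup/*.repo /etc/yum.repos.d/
        # Refresh the repository metadata before re-installing the CDH 4 packages.
        sudo yum clean all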

Rolling Back ZooKeeper

  1. Restore the contents of the dataDir on each ZooKeeper server. These files are located in a directory specified with the dataDir property in the ZooKeeper configuration. The default location is /var/lib/zookeeper.
  2. Make sure that the permissions of all the directories and files are as they were before the upgrade.
  3. Start ZooKeeper using Cloudera Manager.
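
For example, on one ZooKeeper server with the default dataDir, steps 1 and 2 might look like the following sketch. The backup location (/root/zk-backup) is hypothetical, and the ownership shown assumes the default zookeeper user; verify both against your environment:

  # Clear the current contents of the dataDir, then restore the backed-up files.
  sudo rm -rf /var/lib/zookeeper/*
  sudo cp -a /root/zk-backup/. /var/lib/zookeeper/
  # Restore ownership so permissions match the pre-upgrade state.
  sudo chown -R zookeeper:zookeeper /var/lib/zookeeper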

Rolling Back HDFS

You cannot roll back HDFS while high availability is enabled, so the rollback procedure in this topic creates a temporary configuration without high availability. Follow the steps in this section regardless of whether high availability is enabled. If your cluster uses high availability, re-enable it afterward by following the steps in Re-enabling HDFS with High Availability.

  1. Create a rollback configuration directory on each host that has the HDFS role; for example /etc/hadoop/conf.rollback.
  2. Create a core-site.xml file in this directory on all hosts with the HDFS role. The <value> element in this file references the NameNode host. For HDFS with high availability enabled, choose the last active NameNode host (see Determining the Last Active NameNode). The file needs to contain only the following:
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://NameNode_host:port</value>
        <!-- For example:
          hdfs://a1234.cloudera.com:8020
        -->
      </property>
    </configuration>
    
  3. Create an hdfs-site.xml file in the rollback configuration directory on all hosts with the HDFS role. The file specifies values for the following properties:
    • dfs.namenode.name.dir
    • dfs.datanode.data.dir
    • dfs.namenode.checkpoint.dir

      You can omit the dfs.namenode.checkpoint.dir property if high availability was enabled for HDFS in your cluster.

    To find the values you need to create the hdfs-site.xml file:
    1. Open Cloudera Manager.
    2. Go to the HDFS service.
    3. Click the Configuration tab.
    4. Enter the property name in the search field.
    5. Copy each path defined for the property into the <value> element for the property. Precede each path with file:// and separate each directory path with a comma.
    For example:
    <configuration>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/1/dfs/nn,file:///data/2/dfs/nn</value>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/1/dfs/dn,file:///data/2/dfs/dn,file:///data/3/dfs/dn,
               file:///data/4/dfs/dn,file:///data/5/dfs/dn
        </value>
      </property>
      <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///data/1/dfs/snn</value>
      </property>
    </configuration>
    
  4. Restore the storageID values from the VERSION files. On each DataNode, edit the VERSION file in each data directory and replace the value of the storageID field with the value from the VERSION file you backed up from the same DataNode. The storageID is the same for each data directory on a DataNode host.
      Important: Restore the storageID on all data directories on all DataNodes. Do not restore the entire VERSION file.
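
    For a single data directory on one DataNode, the edit might look like the following sketch. The backup path (/root/dn-backup/data1) is hypothetical; repeat the edit for every data directory on every DataNode:

    # Read the pre-upgrade storageID line from the backed-up VERSION file.
    OLD_ID=$(sudo grep '^storageID=' /root/dn-backup/data1/current/VERSION)
    # Replace only the storageID line in the current VERSION file; leave everything else unchanged.
    sudo sed -i "s|^storageID=.*|${OLD_ID}|" /data/1/dfs/dn/current/VERSION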

  5. Copy the /etc/hadoop/conf/log4j.properties file to the /etc/hadoop/conf.rollback directory (the rollback configuration directory you created previously).
  6. Verify that the cluster is now running CDH 4 by running the following command on all Hadoop nodes in your cluster:
    hadoop version
    The version number appears right after the string cdh. For example:
    $ hadoop version
    Hadoop 2.0.0-cdh4.7.1
    ...
  7. Run the following command on the NameNode host. (If high availability is enabled for HDFS, run the command on the active NameNode.)
    sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback namenode -rollback

    This command also starts the NameNode.

  8. While the NameNode is running, run the following command on all DataNode hosts:
    sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback datanode -rollback

    This command also starts the DataNodes.

  9. If your cluster uses a Secondary NameNode, do the following on the Secondary NameNode host:
    1. Delete the directory defined with the dfs.namenode.checkpoint.dir parameter.
    2. Run the following command while the active NameNode and all DataNodes are running:
      sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback secondarynamenode -format
  10. Monitor the progress of the rollback by opening the NameNode web interface in a web browser at NameNode_Host:50070. When all of the blocks are reported and the NameNode no longer reports that it is in safe mode, the rollback is complete. At that point, stop all of the daemons by typing CTRL-C in each open terminal session (on the NameNode host and the DataNode hosts where you ran the hdfs commands).
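
    If you prefer the command line to the web interface, you can also check the safe mode status against the same temporary configuration. This is an optional convenience, not a required step:

    sudo -u hdfs hdfs --config /etc/hadoop/conf.rollback dfsadmin -safemode get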
  11. If your cluster does not have high availability enabled for HDFS, use the Cloudera Manager Admin Console to start the HDFS service and then continue your cluster rollback with Rolling Back HBase:
    1. Go to the HDFS service.
    2. Select Actions > Start.

    If your cluster has high availability enabled for HDFS, continue with the next section to remove the temporary non-HA configuration and re-enable high availability.

Re-enabling HDFS with High Availability

The rollback steps performed so far have restored all of the DataNodes and one of the NameNodes. In a high-availability configuration, you also need to roll back the other (standby) NameNode and the JournalNodes.

  1. On each JournalNode, restore the backed-up edits directory to the same location.
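
    For example, on one JournalNode the restore might look like the following sketch. The edits directory (/data/1/dfs/jn) and backup location (/root/jn-backup) are hypothetical; check the dfs.journalnode.edits.dir property in Cloudera Manager for the actual path:

    # Clear the current edits directory, then restore the backed-up contents.
    sudo rm -rf /data/1/dfs/jn/*
    sudo cp -a /root/jn-backup/. /data/1/dfs/jn/
    sudo chown -R hdfs:hdfs /data/1/dfs/jn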
  2. On the standby NameNode (the NameNode host that has not been rolled back), delete the directories specified in the dfs.namenode.name.dir property. Failing to delete these directories causes the rollback to fail.
  3. Using Cloudera Manager, start the last active NameNode:
    1. Go to the HDFS service.
    2. Click the Instances tab.
    3. Select NameNode (Active) from the list of Role Types.
    4. Select Actions for Selected > Start.
  4. Using Cloudera Manager, bootstrap the standby NameNode:
    1. Go to the HDFS service.
    2. Click the Instances tab.
    3. Select NameNode (Standby).
    4. Select Actions > Bootstrap Standby NameNode.

    The fsImage is restored from the last active NameNode to the standby NameNode.

  5. Restart the HDFS service:
    1. Go to the HDFS service.
    2. Select Actions > Restart.

Rolling Back HBase

No additional steps are required to roll back HBase. When you rolled back HDFS, all of the HBase data was also rolled back.

Using Cloudera Manager, start HBase:
  1. Go to the HBase service.
  2. Select Actions > Start.
If you encounter errors when starting HBase, delete the znode in ZooKeeper and then start HBase again:
  1. In Cloudera Manager, look up the value of the zookeeper.znode.parent property. The default value is /hbase.
  2. Connect to the ZooKeeper ensemble by running the following command from any HBase gateway host:
    zookeeper-client -server zookeeper_ensemble

    To find the value to use for zookeeper_ensemble, open the /etc/hbase/conf/hbase-site.xml file on any HBase gateway host. Use the value of the hbase.zookeeper.quorum property.

      Note: If you have deployed a secure cluster, you must connect to ZooKeeper using a client jaas.conf file. You can find such a file in an HBase process directory (/var/run/cloudera-scm-agent/process/). Specify the jaas.conf using the JVM flags by setting the CLIENT_JVMFLAGS environment variable before starting the ZooKeeper client:

      export CLIENT_JVMFLAGS="-Djava.security.auth.login.config=/var/run/cloudera-scm-agent/process/HBase_process_directory/jaas.conf"
      zookeeper-client -server zookeeper_ensemble

    The ZooKeeper command-line interface opens.

  3. Delete the znode by entering the following command, substituting the value of zookeeper.znode.parent if it is not the default:
    rmr /hbase

Rolling Back Hive

Restore the Hive metastore database from your backup. See the documentation for your database for details.

Rolling Back Oozie

Restore the Oozie database from your backup. See the documentation for your database for details.

Rolling Back Search

Restore the contents of the /var/lib/solr directory from your backup. Ensure that the file permissions are the same as they were before rolling back. (The permissions are typically solr:solr.)
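
For example, assuming the backup was copied to a hypothetical /root/solr-backup directory, the restore might look like this:

  # Mirror the backed-up files into /var/lib/solr, then restore ownership.
  sudo rsync -a --delete /root/solr-backup/ /var/lib/solr/
  sudo chown -R solr:solr /var/lib/solr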

Rolling Back Sqoop 2

If you are not using the default embedded Derby database for Sqoop 2, restore the database you have configured for Sqoop 2 from your backup. See the documentation for your database for details.

If you are using the embedded Derby database, restore the repository subdirectory of the Sqoop 2 metastore directory from your backup. This location is specified with the Sqoop 2 Server Metastore Directory property; the default location is /var/lib/sqoop2, which places the Derby database files in /var/lib/sqoop2/repository.
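
For the embedded Derby case, a minimal sketch (with a hypothetical backup location of /root/sqoop2-backup) might look like the following; verify the directory owner used on your hosts before changing ownership:

  sudo rm -rf /var/lib/sqoop2/repository
  sudo cp -a /root/sqoop2-backup/repository /var/lib/sqoop2/repository
  sudo chown -R sqoop2:sqoop2 /var/lib/sqoop2/repository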

Rolling Back Hue

  1. Restore the Hue database from your backup.
  2. Restore the app.reg file from your backup. Place it in the location of your Hue installation (referred to as HUE_HOME). For package installs, this is usually /usr/lib/hue; for parcel installs, this is usually /opt/cloudera/parcels/<parcel version>/lib/hue/.
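
    For a parcel-based install, for example, restoring app.reg might look like the following sketch; the backup path is hypothetical, and HUE_HOME should point at the active CDH 4 parcel on your system:

    HUE_HOME=/opt/cloudera/parcels/CDH/lib/hue   # adjust if your parcel directory differs
    sudo cp /root/backups/app.reg "$HUE_HOME/app.reg"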
  3. Using Cloudera Manager, install the Beeswax role:
    1. Go to the Hue service.
    2. Select the Instances tab.
    3. Click Add.
    4. Locate the row for the host with the Hue Server role and select Beeswax Server.
    5. Click Continue.

Rolling Back Cloudera Manager

After you complete the rollback steps, your cluster is using Cloudera Manager 5 to manage your CDH 4 cluster. You can continue to use Cloudera Manager 5 to manage your CDH 4 cluster, or you can downgrade to Cloudera Manager 4 by following these steps:

  1. Stop your CDH cluster using Cloudera Manager:
    1. Go to the Home page.
    2. In the drop-down list next to your cluster, select Stop.
  2. Stop Cloudera Management Services:
    1. Select Clusters > Cloudera Management Service.
    2. Select Actions > Stop.
  3. Stop Cloudera Manager Server by running the following command on the Cloudera Manager Server host:
    sudo service cloudera-scm-server stop
  4. Stop the Cloudera Manager Agents by running the following command on all hosts in your cluster:
    sudo service cloudera-scm-agent stop
  5. Downgrade the JDK, if necessary.

    If you are rolling back to Cloudera Manager version 4.6.x or lower, downgrade the version of your JDK to the version deployed before the upgrade, or to the latest Oracle update of JDK 1.6. You can also choose to run Cloudera Manager using the version of the JDK you installed during the upgrade. To verify that the JDK you choose is supported by Cloudera Manager and CDH, see CDH 5 and Cloudera Manager 5 Requirements and Supported Versions.

    You can keep two versions of a JDK on a host machine, but you must make sure the correct version is used by the software. See Java Development Kit Installation and Configuring a Custom Java Home Location for information about specifying a JDK.

  6. Downgrade the software:
    1. Run the following command on the Cloudera Manager Server host:
      RHEL:
        $ sudo yum remove cloudera-manager-server
      SLES:
        $ sudo zypper remove cloudera-manager-server
      Ubuntu or Debian:
        $ sudo apt-get purge cloudera-manager-server
    2. Run the following command on all hosts:
      RHEL:
        $ sudo yum remove cloudera-manager-daemons cloudera-manager-agent
      SLES:
        $ sudo zypper remove cloudera-manager-daemons cloudera-manager-agent
      Ubuntu or Debian:
        $ sudo apt-get purge cloudera-manager-daemons cloudera-manager-agent
    3. Restore the Cloudera Manager repository files that you previously backed up to the Cloudera repository directory.
    4. Run the following commands on all hosts:
      RHEL:
        $ sudo yum clean all
        $ sudo yum install cloudera-manager-agent
      SLES:
        $ sudo zypper refresh -s
        $ sudo zypper install cloudera-manager-agent
      Ubuntu or Debian:
        $ sudo apt-get update
        $ sudo apt-get install cloudera-manager-agent
    5. Run the following commands on the Cloudera Manager server host:
      RHEL:
        $ sudo yum install cloudera-manager-server
      SLES:
        $ sudo zypper install cloudera-manager-server
      Ubuntu or Debian:
        $ sudo apt-get install cloudera-manager-server
  7. Restore the following Cloudera Manager databases:
    • Cloudera Manager Server
    • Activity Monitor (depending on your deployment, this role may not be installed)
    • Reports Manager
    • Service Monitor
    • Host Monitor
    • Navigator Audit Server
    • Navigator Metadata Server

    See the documentation for your database for details.

      Important: Restore the databases to their pre-upgrade state. If the Cloudera Manager databases are restored in a way that leaves tables that were created during the upgrade, there could be problems if you attempt to upgrade Cloudera Manager again after the rollback.
  8. On the Cloudera Manager Server, restore the following from your backup to the same location:
    1. The /etc/cloudera-scm-server/db.properties file
    2. The contents of the /var/lib/cloudera-scm-eventserver directory
  9. On each host in the cluster, restore the /etc/cloudera-scm-agent/config.ini file from your backup.
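
    For example, assuming the backups were copied to a hypothetical /root/cm-backup directory on each host, steps 8 and 9 might look like this:

    # On the Cloudera Manager Server host:
    sudo cp /root/cm-backup/db.properties /etc/cloudera-scm-server/db.properties
    sudo rm -rf /var/lib/cloudera-scm-eventserver/*
    sudo cp -a /root/cm-backup/cloudera-scm-eventserver/. /var/lib/cloudera-scm-eventserver/
    # On every host in the cluster:
    sudo cp /root/cm-backup/config.ini /etc/cloudera-scm-agent/config.ini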
  10. Start Cloudera Manager by running the following command on the Cloudera Manager Server host:
    sudo service cloudera-scm-server start
  11. Start the Cloudera Manager Agents by running the following command on all hosts in your clusters:
    sudo service cloudera-scm-agent start
  12. Log in to the Cloudera Manager Admin console and start the Cloudera Management Service:
    1. Select Clusters > Cloudera Management Service.
    2. Select Actions > Start.
  13. Start your cluster:
    1. Select Clusters > Cluster Name.
    2. Select Actions > Start.

Restoring Databases

Several steps in the rollback procedures require you to restore previously backed-up databases. The steps for backing up and restoring databases differ depending on the database vendor and version you select for your cluster and are beyond the scope of this document.

  Important: Restore the databases to their exact state as of when you took the backup. Do not merge in any changes that may have occurred during the subsequent upgrade.
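
As one illustration, if a database (for example, the Hive metastore) runs on MySQL and was backed up with mysqldump, the restore might look like the following; the database name and backup path are hypothetical, and other database products have their own backup and restore tools:

  # Drop whatever the upgrade left behind and recreate an empty database.
  mysql -u root -p -e "DROP DATABASE metastore; CREATE DATABASE metastore;"
  # Load the pre-upgrade dump.
  mysql -u root -p metastore < /root/backups/metastore.sql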