Upgrading to CDH 5
Use the instructions on this page only to upgrade from CDH 4.
- To upgrade from CDH 4, you must uninstall CDH 4, and then install CDH 5. Make sure you allow sufficient time for this, and do the necessary backup and preparation as described below.
- If you have configured HDFS HA with NFS shared storage, do not proceed. This configuration is not supported on CDH 5; Quorum-based storage is the only supported HDFS HA configuration on CDH 5. Unconfigure your NFS shared storage configuration before you attempt to upgrade.
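If you are not sure which HA edits storage your cluster uses, one quick check (assuming your client configuration lives in /etc/hadoop/conf) is to inspect the shared edits setting; a qjournal:// URI indicates Quorum-based storage, while a local mount path indicates NFS shared storage:
$ grep -A1 dfs.namenode.shared.edits.dir /etc/hadoop/conf/hdfs-site.xml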
Use the service command to start, stop, and restart CDH components, rather than running scripts in /etc/init.d directly. The service command creates a predictable environment by setting the current working directory to / and removing most environment variables (passing only LANG and TERM). With /etc/init.d, existing environment variables remain in force and can produce unpredictable results. When you install CDH from packages, service is installed as part of the Linux Standard Base (LSB).
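For example, to stop a DataNode using the service command rather than its init script:
$ sudo service hadoop-hdfs-datanode stop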
To upgrade to the latest CDH 5 release, perform the following steps.
- Back Up Configuration Data and Stop Services
- Back up the HDFS Metadata
- Uninstall the CDH 4 Version of Hadoop
- Download the Latest Version of CDH 5
- Install CDH 5 with YARN
- Install CDH 5 with MRv1
- Copy the CDH 5 Logging File
- In an HA Deployment, Upgrade and Start the JournalNodes
- Upgrade the HDFS Metadata
- Start YARN or MapReduce MRv1
- Set the Sticky Bit
- Re-Install CDH 5 Components
- Apply Configuration File Changes
- Finalize the HDFS Metadata Upgrade
Back Up Configuration Data and Stop Services
- Put the NameNode into safe mode and save the fsimage:
- Put the NameNode (or active NameNode in an HA configuration) into safe mode:
$ sudo -u hdfs hdfs dfsadmin -safemode enter
- Perform a saveNamespace operation:
$ sudo -u hdfs hdfs dfsadmin -saveNamespace
This will result in a new fsimage being written out with no edit log entries.
- With the NameNode still in safe mode, shut down all services as instructed below.
- For each component you are using, back up configuration data, databases, and other important files (a sketch of a typical backup appears after this list).
- Shut down the Hadoop services across your entire cluster:
for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x stop ; done
- Check each host, as root, to make sure that there are no processes running as the hdfs or mapred users:
# ps -aef | grep java
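As an illustration only (the paths and the metastore database name below are assumptions; substitute your own), backing up Hadoop and Hive configuration files plus a MySQL-based Hive metastore might look like this:
$ sudo tar -czf /root/cdh4-conf-backup.tar.gz /etc/hadoop/conf /etc/hive/conf
$ mysqldump -u root -p metastore > /root/hive-metastore-backup.sql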
Back up the HDFS Metadata
Do this step when you are sure that all Hadoop services have been shut down. It is particularly important that the NameNode service is not running so that you can make a consistent backup.
To back up the HDFS metadata on the NameNode machine:
- Cloudera recommends backing up HDFS metadata on a regular basis, as well as before a major upgrade.
- dfs.name.dir is deprecated but still works; dfs.namenode.name.dir is preferred. This example uses dfs.name.dir.
- Find the location of your dfs.name.dir (or dfs.namenode.name.dir); for example:
$ grep -C1 dfs.name.dir /etc/hadoop/conf/hdfs-site.xml
You should see something like this:
<property>
  <name>dfs.name.dir</name>
  <value>/mnt/hadoop/hdfs/name</value>
</property>
- Back up the directory. The path inside the <value> XML element is the path to your HDFS metadata. If you see a comma-separated list of paths, there is no need to back up all of them; they store the same data. Back up the first directory, for example, by using the following commands. (You can verify the resulting archive afterward; see the check following the warning below.)
$ cd /mnt/hadoop/hdfs/name
# tar -cvf /root/nn_backup_data.tar .
./
./current/
./current/fsimage
./current/fstime
./current/VERSION
./current/edits
./image/
./image/fsimage
Warning: If you see a file containing the word lock, the NameNode is probably still running. Repeat the preceding steps, starting by shutting down the Hadoop services.
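Once the NameNode is confirmed stopped and the archive has been created, you can verify its contents by listing them (using the backup path from the example above):
# tar -tvf /root/nn_backup_data.tar | head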
Uninstall the CDH 4 Version of Hadoop
To uninstall Hadoop:
Run this command on each host:
On Red Hat-compatible systems:
$ sudo yum remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
On SLES systems:
$ sudo zypper remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
On Ubuntu systems:
$ sudo apt-get remove bigtop-utils bigtop-jsvc bigtop-tomcat sqoop2-client hue-common solr
Remove CDH 4 Repository Files
- Before removing the files, make sure you have not added any custom entries that you want to preserve. (To preserve custom entries, back up the files before removing them.)
- Make sure you remove Impala and Search repository files, as well as the CDH repository file.
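For example, on a Red Hat-compatible system the repository definitions are typically files under /etc/yum.repos.d/ with names such as cloudera-cdh4.repo, cloudera-impala.repo, and cloudera-search.repo; the exact names depend on how the repositories were added, so confirm them before deleting:
$ ls /etc/yum.repos.d/
$ sudo rm /etc/yum.repos.d/cloudera-cdh4.repo /etc/yum.repos.d/cloudera-impala.repo /etc/yum.repos.d/cloudera-search.repo
On SLES systems, look under /etc/zypp/repos.d/; on Ubuntu and Debian systems, look under /etc/apt/sources.list.d/.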
Download the Latest Version of CDH 5
For instructions on how to add a CDH 5 yum repository or build your own CDH 5 yum repository, see Installing the Latest CDH 5 Release.
On Red Hat-compatible systems:
- Download the CDH 5 "1-click Install" package (or RPM).
Click the appropriate RPM and Save File to a directory with write access (for example, your home directory).
Download the RPM for your OS version:
- RHEL/CentOS/Oracle 5: RHEL/CentOS/Oracle 5 link
- RHEL/CentOS/Oracle 6: RHEL/CentOS/Oracle 6 link
- RHEL/CentOS/Oracle 7: RHEL/CentOS/Oracle 7 link
- Install the RPM for all RHEL versions:
$ sudo yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm
sudo yum clean all
Now (optionally) add a repository key:
- For Red Hat/CentOS/Oracle 5 systems:
$ sudo rpm --import https://archive.cloudera.com/cdh5/redhat/5/x86_64/cdh/RPM-GPG-KEY-cloudera
- For Red Hat/CentOS/Oracle 6 systems:
$ sudo rpm --import https://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
On SLES systems:
- Download the CDH 5 "1-click Install" package.
Download the rpm file, choose Save File, and save it to a directory to which you have write access (for example, your home directory).
- Install the RPM:
$ sudo rpm -i cloudera-cdh-5-0.x86_64.rpm
- Update your system package index by running:
$ sudo zypper refresh
sudo zypper clean --all
Now (optionally) add a repository key:
$ sudo rpm --import https://archive.cloudera.com/cdh5/sles/11/x86_64/cdh/RPM-GPG-KEY-cloudera
On Ubuntu and Debian systems:
- Download the CDH 5 "1-click Install" package:
Download the package for your OS version:
- Jessie: Jessie package
- Wheezy: Wheezy package
- Precise: Precise package
- Trusty: Trusty package
- Install the package by doing one of the following:
- Choose Open with in the download window to use the package manager.
- Choose Save File, save the package to a directory to which you have write access (for example, your home directory), and install it from the command line.
For example:
sudo dpkg -i cdh5-repository_1.0_all.deb
sudo apt-get update
Now (optionally) add a repository key:
- For Ubuntu Precise systems:
$ curl -s https://archive.cloudera.com/cdh5/ubuntu/precise/amd64/cdh/archive.key | sudo apt-key add -
- For Debian Wheezy systems:
$ curl -s https://archive.cloudera.com/cdh5/debian/wheezy/amd64/cdh/archive.key | sudo apt-key add -
Install CDH 5 with YARN
- Install and deploy ZooKeeper.
Important: Cloudera recommends that you install (or update) and start a ZooKeeper cluster before proceeding. This is a requirement if you are deploying high availability (HA) for the NameNode or JobTracker.
Follow instructions under ZooKeeper Installation.
- Install each type of daemon package on the appropriate system(s), as follows.
Where to install
Install commands
Resource Manager host (analogous to MRv1 JobTracker) running:
Red Hat/CentOS compatible
sudo yum clean all; sudo yum install hadoop-yarn-resourcemanager
SLES
sudo zypper clean --all; sudo zypper install hadoop-yarn-resourcemanager
Ubuntu or Debian
sudo apt-get update; sudo apt-get install hadoop-yarn-resourcemanager
NameNode host(s) running:
Red Hat/CentOS compatible
sudo yum clean all; sudo yum install hadoop-hdfs-namenode
SLES
sudo zypper clean --all; sudo zypper install hadoop-hdfs-namenode
Ubuntu or Debian
sudo apt-get update; sudo apt-get install hadoop-hdfs-namenode
Secondary NameNode host (if used) running:
Red Hat/CentOS compatible
sudo yum clean all; sudo yum install hadoop-hdfs-secondarynamenode
SLES
sudo zypper clean --all; sudo zypper install hadoop-hdfs-secondarynamenode
Ubuntu or Debian
sudo apt-get update; sudo apt-get install hadoop-hdfs-secondarynamenode
All cluster hosts except the Resource Manager running:
Red Hat/CentOS compatible
sudo yum clean all; sudo yum install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce
SLES
sudo zypper clean --all; sudo zypper install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce
Ubuntu or Debian
sudo apt-get update; sudo apt-get install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce
One host in the cluster running:
Red Hat/CentOS compatible
sudo yum clean all; sudo yum install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver
SLES
sudo zypper clean --all; sudo zypper install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver
Ubuntu or Debian
sudo apt-get update; sudo apt-get install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver
All client hosts, running:
Red Hat/CentOS compatible
sudo yum clean all; sudo yum install hadoop-client
SLES
sudo zypper clean --all; sudo zypper install hadoop-client
Ubuntu or Debian
sudo apt-get update; sudo apt-get install hadoop-client
Note: The hadoop-yarn and hadoop-hdfs packages are installed on each system automatically as dependencies of the other packages.
Install CDH 5 with MRv1
Skip this step if you intend to use only YARN. If you are installing both YARN and MRv1, you can skip any packages you already installed in the preceding step, Install CDH 5 with YARN.
To install CDH 5 with MRv1:
- Install and deploy ZooKeeper.
Important: Cloudera recommends that you install (or update) and start a ZooKeeper cluster before proceeding. This is a requirement if you are deploying high availability (HA) for the NameNode or JobTracker.
Follow instructions under ZooKeeper Installation.
- Install each type of daemon package on the appropriate system(s), as follows.
Where to install
Install commands
JobTracker host running:
Red Hat/CentOS compatible
sudo yum clean all; sudo yum install hadoop-0.20-mapreduce-jobtracker
SLES
sudo zypper clean --all; sudo zypper install hadoop-0.20-mapreduce-jobtracker
Ubuntu or Debian
sudo apt-get update; sudo apt-get install hadoop-0.20-mapreduce-jobtracker
NameNode host(s) running:
Red Hat/CentOS compatible
sudo yum clean all; sudo yum install hadoop-hdfs-namenode
SLES
sudo zypper clean --all; sudo zypper install hadoop-hdfs-namenode
Ubuntu or Debian
sudo apt-get update; sudo apt-get install hadoop-hdfs-namenode
Secondary NameNode host (if used) running:
Red Hat/CentOS compatible
sudo yum clean all; sudo yum install hadoop-hdfs-secondarynamenode
SLES
sudo zypper clean --all; sudo zypper install hadoop-hdfs-secondarynamenode
Ubuntu or Debian
sudo apt-get update; sudo apt-get install hadoop-hdfs-secondarynamenode
All cluster hosts except the JobTracker, NameNode, and Secondary (or Standby) NameNode hosts, running:
Red Hat/CentOS compatible
sudo yum clean all; sudo yum install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode
SLES
sudo zypper clean --all; sudo zypper install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode
Ubuntu or Debian
sudo apt-get update; sudo apt-get install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode
All client hosts, running:
Red Hat/CentOS compatible
sudo yum clean all; sudo yum install hadoop-client
SLES
sudo zypper clean --all; sudo zypper install hadoop-client
Ubuntu or Debian
sudo apt-get update; sudo apt-get install hadoop-client
Copy the CDH 5 Logging File
Copy over the log4j.properties file to your custom directory on each host in the cluster; for example:
$ cp /etc/hadoop/conf.empty/log4j.properties /etc/hadoop/conf.my_cluster/log4j.properties
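If you are not sure which configuration directory is currently active on a host, you can inspect the hadoop-conf alternative (CDH uses the alternatives mechanism to select /etc/hadoop/conf); the exact output depends on your deployment:
$ sudo alternatives --display hadoop-conf
On Ubuntu, Debian, and SLES systems, use update-alternatives --display hadoop-conf instead.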
In an HA Deployment, Upgrade and Start the JournalNodes
- Install the JournalNode daemons on each of the machines where they will run.
To install JournalNode on Red Hat-compatible systems:
$ sudo yum install hadoop-hdfs-journalnode
To install JournalNode on Ubuntu and Debian systems:
$ sudo apt-get install hadoop-hdfs-journalnode
To install JournalNode on SLES systems:
$ sudo zypper install hadoop-hdfs-journalnode
- Start the JournalNode daemons on each of the machines where they will run:
sudo service hadoop-hdfs-journalnode start
Wait for the daemons to start before proceeding to the next step.
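One quick way to confirm that a JournalNode is up (in addition to checking its log under /var/log/hadoop-hdfs) is to look for its process:
$ sudo jps | grep JournalNode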
Upgrade the HDFS Metadata
- For an HA deployment, do sub-steps 1, 2, and 3 below.
- For a non-HA deployment, do sub-steps 1, 3, and 4 below.
- To upgrade the HDFS metadata, run the following command on the NameNode. If HA is enabled, do this on the active NameNode only, and make sure the JournalNodes
have been upgraded to CDH 5 and are up and running before you run the command.
$ sudo service hadoop-hdfs-namenode upgrade
Important: In an HDFS HA deployment, it is critically important that you do this on only one NameNode.
You can watch the progress of the upgrade by running:
$ sudo tail -f /var/log/hadoop-hdfs/hadoop-hdfs-namenode-<hostname>.log
Look for a line that confirms the upgrade is complete, such as: /var/lib/hadoop-hdfs/cache/hadoop/dfs/<name> is complete
Note: The NameNode upgrade process can take a while, depending on how many files you have.
- Do this step only in an HA configuration. Otherwise skip to starting up the DataNodes.
Wait for the NameNode to exit safe mode, and then restart the standby NameNode.
- If Kerberos is enabled:
$ kinit -kt /path/to/hdfs.keytab hdfs/<fully.qualified.domain.name@YOUR-REALM.COM> && hdfs namenode -bootstrapStandby
$ sudo service hadoop-hdfs-namenode start
- If Kerberos is not enabled:
$ sudo -u hdfs hdfs namenode -bootstrapStandby
$ sudo service hadoop-hdfs-namenode start
For more information about the haadmin -failover command, see Administering an HDFS High Availability Cluster.
- Start up the DataNodes:
On each DataNode:
$ sudo service hadoop-hdfs-datanode start
- Do this step only in a non-HA configuration. Otherwise skip to starting YARN or MRv1.
Wait for the NameNode to exit safe mode, and then start the Secondary NameNode.
- To check that the NameNode has exited safe mode, look for messages in the log file, or on the NameNode's web interface, that say "...no longer in safe mode." (You can also query safe mode directly; see the example after this list.)
- To start the Secondary NameNode (if used), enter the following command on the Secondary NameNode host:
$ sudo service hadoop-hdfs-secondarynamenode start
- To complete the cluster upgrade, follow the remaining steps below.
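As noted above, you can also query the NameNode's safe mode state directly. This is a quick check, assuming Kerberos is not enabled (with Kerberos, run the command as an authenticated hdfs principal instead):
$ sudo -u hdfs hdfs dfsadmin -safemode get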
Start YARN or MapReduce MRv1
You are now ready to start and test MRv1 or YARN.
Follow the instructions under Start MapReduce with YARN or under Start MapReduce (MRv1), depending on which framework you are deploying.
Start MapReduce with YARN
Create a history directory and set permissions; for example:
sudo -u hdfs hadoop fs -mkdir /user/history
sudo -u hdfs hadoop fs -chmod -R 1777 /user/history
sudo -u hdfs hadoop fs -chown yarn /user/history
Create the /var/log/hadoop-yarn directory and set ownership:
$ sudo -u hdfs hadoop fs -mkdir /var/log/hadoop-yarn
$ sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn
Verify the directory structure, ownership, and permissions:
$ sudo -u hdfs hadoop fs -ls -R /
You should see:
drwxrwxrwt - hdfs supergroup 0 2012-04-19 14:31 /tmp
drwxr-xr-x - hdfs supergroup 0 2012-05-31 10:26 /user
drwxrwxrwt - yarn supergroup 0 2012-04-19 14:31 /user/history
drwxr-xr-x - hdfs supergroup 0 2012-05-31 15:31 /var
drwxr-xr-x - hdfs supergroup 0 2012-05-31 15:31 /var/log
drwxr-xr-x - yarn mapred 0 2012-05-31 15:31 /var/log/hadoop-yarn
To start YARN, start the ResourceManager and NodeManager services:
On the ResourceManager system:
$ sudo service hadoop-yarn-resourcemanager start
On each NodeManager system (typically the same ones where DataNode service runs):
$ sudo service hadoop-yarn-nodemanager start
To start the MapReduce JobHistory Server
On the MapReduce JobHistory Server system:
$ sudo service hadoop-mapreduce-historyserver start
For each user who will be submitting MapReduce jobs using MapReduce v2 (YARN), or running Pig, Hive, or Sqoop in a YARN installation, make sure that the HADOOP_MAPRED_HOME environment variable is set correctly as follows:
$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
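The export applies only to the current shell session. To make it persistent for a user, assuming a bash login shell, append it to that user's ~/.bashrc:
$ echo 'export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce' >> ~/.bashrc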
Verify basic cluster operation for YARN.
At this point your cluster is upgraded and ready to run jobs. Before running your production jobs, verify basic cluster operation by running an example from the Apache Hadoop web site.
For important configuration information, see Deploying MapReduce v2 (YARN) on a Cluster.
- Create a home directory on HDFS for the user who will be running the job (for example, joe):
$ sudo -u hdfs hadoop fs -mkdir /user/joe
$ sudo -u hdfs hadoop fs -chown joe /user/joe
Do the following steps as the user joe.
- Make a directory in HDFS called input and copy some XML files into it by running the following commands in pseudo-distributed mode:
$ hadoop fs -mkdir input
$ hadoop fs -put /etc/hadoop/conf/*.xml input
$ hadoop fs -ls input
Found 3 items:
-rw-r--r-- 1 joe supergroup 1348 2012-02-13 12:21 input/core-site.xml
-rw-r--r-- 1 joe supergroup 1913 2012-02-13 12:21 input/hdfs-site.xml
-rw-r--r-- 1 joe supergroup 1001 2012-02-13 12:21 input/mapred-site.xml
- Set HADOOP_MAPRED_HOME for user joe:
$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
- Run an example Hadoop job to grep with a regular expression in your input data.
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar grep input output23 'dfs[a-z.]+'
- After the job completes, you can find the output in the HDFS directory named output23 because you specified that output directory to Hadoop.
$ hadoop fs -ls
Found 2 items
drwxr-xr-x - joe supergroup 0 2009-08-18 18:36 /user/joe/input
drwxr-xr-x - joe supergroup 0 2009-08-18 18:38 /user/joe/output23
You can see that there is a new directory called output23.
- List the output files.
$ hadoop fs -ls output23
Found 2 items
drwxr-xr-x - joe supergroup 0 2009-02-25 10:33 /user/joe/output23/_SUCCESS
-rw-r--r-- 1 joe supergroup 1068 2009-02-25 10:33 /user/joe/output23/part-r-00000
- Read the results in the output file.
$ hadoop fs -cat output23/part-r-00000 | head
1 dfs.safemode.min.datanodes
1 dfs.safemode.extension
1 dfs.replication
1 dfs.permissions.enabled
1 dfs.namenode.name.dir
1 dfs.namenode.checkpoint.dir
1 dfs.datanode.data.dir
You have now confirmed your cluster is successfully running CDH 5.
Start MapReduce (MRv1)
After you have verified HDFS is operating correctly, you are ready to start MapReduce. On each TaskTracker system:
$ sudo service hadoop-0.20-mapreduce-tasktracker start
On the JobTracker system:
$ sudo service hadoop-0.20-mapreduce-jobtracker start
Verify that the JobTracker and TaskTracker started properly.
$ sudo jps | grep Tracker
If the permissions of directories are not configured correctly, the JobTracker and TaskTracker processes start and immediately fail. If this happens, check the JobTracker and TaskTracker logs and set the permissions correctly.
For each user who will be submitting MapReduce jobs using MRv1, or running Pig, Hive, or Sqoop in an MRv1 installation, make sure that the HADOOP_MAPRED_HOME environment variable is set correctly, as follows:
$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
Verify basic cluster operation for MRv1.
At this point your cluster is upgraded and ready to run jobs. Before running your production jobs, verify basic cluster operation by running an example from the Apache Hadoop web site.
- Create a home directory on HDFS for the user who will be running the job (for example, joe):
$ sudo -u hdfs hadoop fs -mkdir /user/joe
$ sudo -u hdfs hadoop fs -chown joe /user/joe
Do the following steps as the user joe.
- Make a directory in HDFS called input and copy some XML files into it by running the following commands:
$ hadoop fs -mkdir input
$ hadoop fs -put /etc/hadoop/conf/*.xml input
$ hadoop fs -ls input
Found 3 items:
-rw-r--r-- 1 joe supergroup 1348 2012-02-13 12:21 input/core-site.xml
-rw-r--r-- 1 joe supergroup 1913 2012-02-13 12:21 input/hdfs-site.xml
-rw-r--r-- 1 joe supergroup 1001 2012-02-13 12:21 input/mapred-site.xml
- Set HADOOP_MAPRED_HOME for user joe:
$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce/
- Run an example Hadoop job to grep with a regular expression in your input data.
$ /usr/bin/hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep input output 'dfs[a-z.]+'
- After the job completes, you can find the output in the HDFS directory named output because you specified that output directory to Hadoop.
$ hadoop fs -ls
Found 2 items
drwxr-xr-x - joe supergroup 0 2009-08-18 18:36 /user/joe/input
drwxr-xr-x - joe supergroup 0 2009-08-18 18:38 /user/joe/output
You can see that there is a new directory called output.
- List the output files.
$ hadoop fs -ls output
Found 3 items
drwxr-xr-x - joe supergroup 0 2009-02-25 10:33 /user/joe/output/_logs
-rw-r--r-- 1 joe supergroup 1068 2009-02-25 10:33 /user/joe/output/part-00000
-rw-r--r-- 1 joe supergroup 0 2009-02-25 10:33 /user/joe/output/_SUCCESS
- Read the results in the output file; for example:
$ hadoop fs -cat output/part-00000 | head
1 dfs.datanode.data.dir
1 dfs.namenode.checkpoint.dir
1 dfs.namenode.name.dir
1 dfs.replication
1 dfs.safemode.extension
1 dfs.safemode.min.datanodes
You have now confirmed your cluster is successfully running CDH 5.
Important: If you have client hosts, make sure you also update them to CDH 5, and upgrade the components running on those clients as well.
Set the Sticky Bit
For security reasons Cloudera strongly recommends you set the sticky bit on directories if you have not already done so.
The sticky bit prevents anyone except the superuser, directory owner, or file owner from deleting or moving the files within a directory. (Setting the sticky bit on a file has no effect.) Do this for directories such as /tmp; a minimal example follows. (Instructions for creating /tmp and setting its permissions are covered in the CDH 5 deployment documentation.)
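For example, to set the sticky bit on /tmp (assuming Kerberos is not enabled; with Kerberos, run the command as an authenticated hdfs principal rather than with sudo -u hdfs):
$ sudo -u hdfs hadoop fs -chmod 1777 /tmp
The permissions on /tmp then appear as drwxrwxrwt in hadoop fs -ls / output.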
Re-Install CDH 5 Components
When upgrading CDH, Cloudera strongly recommends that you upgrade your client jars to match. For help on finding matching artifacts, refer to Using the CDH 5 Maven Repository.
CDH 5 Components
- Crunch Installation
- Flume Installation
- HBase Installation
- HCatalog Installation
- Hive Installation
- HttpFS Installation
- Hue Installation
- Impala Installation
- KMS Installation and Upgrade
- Mahout Installation
- Oozie Installation
- Pig Installation
- Search Installation
- Sentry Installation
- Snappy Installation
- Spark Installation
- Sqoop 1 Installation
- Sqoop 2 Installation
- Whirr Installation
- ZooKeeper Installation
Apply Configuration File Changes
During uninstall, the package manager renames any configuration files you have modified from <file> to <file>.rpmsave. During re-install, the package manager creates a new <file> with applicable defaults. You are responsible for applying any changes captured in the original CDH 4 configuration file to the new CDH 5 configuration file. In the case of Ubuntu and Debian upgrades, a file will not be installed if there is already a version of that file on the system, and you will be prompted to resolve conflicts; for details, see Automatic handling of configuration files by dpkg.
For example, if you have modified your CDH 4 zoo.cfg configuration file (/etc/zookeeper.dist/zoo.cfg), RPM uninstall and re-install (using yum remove) renames and preserves a copy of your modified zoo.cfg as /etc/zookeeper.dist/zoo.cfg.rpmsave. You should compare this to the new /etc/zookeeper/conf/zoo.cfg and resolve any differences that should be carried forward (typically where you have changed property value defaults). Do this for each component you upgrade to CDH 5.
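A simple way to find the differences to carry forward, using the ZooKeeper example above:
$ diff /etc/zookeeper.dist/zoo.cfg.rpmsave /etc/zookeeper/conf/zoo.cfg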
Finalize the HDFS Metadata Upgrade
To finalize the HDFS metadata upgrade you began earlier in this procedure, proceed as follows:
- Make sure you are satisfied that the CDH 5 upgrade has succeeded and everything is running smoothly. To determine when finalization is warranted, run important workloads and ensure
they are successful.
Warning: Do not proceed until you are sure you are satisfied with the new deployment. Once you have finalized the HDFS metadata, you cannot revert to an earlier version of HDFS.
Note:
- If you need to restart the NameNode during this period (after having begun the upgrade process, but before you've run finalizeUpgrade), restart your NameNode without the -upgrade option.
- Verifying that you are ready to finalize the upgrade can take a long time. Make sure you have enough free disk space, keeping in mind the following (a quick way to check available space appears after this list):
- Deleting files does not free up disk space.
- Using the balancer causes all moved replicas to be duplicated.
- All on-disk data representing the NameNode's metadata is retained, which could more than double the amount of space required on the NameNode and JournalNode disks.
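To check available space, you can look at the local filesystem that holds the NameNode metadata directory (the path below is the example path used earlier in this procedure; substitute your own dfs.name.dir or dfs.namenode.name.dir value) and at overall DFS capacity:
$ df -h /mnt/hadoop/hdfs/name
$ sudo -u hdfs hdfs dfsadmin -report | head -20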
- Finalize the HDFS metadata upgrade: use one of the following commands, depending on whether Kerberos is enabled (see Configuring Hadoop Security in CDH 5).
Important: In an HDFS HA deployment, make sure that both the NameNodes and all of the JournalNodes are up and functioning normally before you proceed.
- If Kerberos is enabled:
$ kinit -kt /path/to/hdfs.keytab hdfs/<fully.qualified.domain.name@YOUR-REALM.COM> && hdfs dfsadmin -finalizeUpgrade
- If Kerberos is not enabled:
$ sudo -u hdfs hdfs dfsadmin -finalizeUpgrade
Note: After the metadata upgrade completes, the previous/ and blocksBeingWritten/ directories in the DataNodes' data directories aren't cleared until the DataNodes are restarted.