
Managing YARN

Adding the YARN Service

Minimum Required Role: Cluster Administrator (also provided by Full Administrator)

  1. On the Home > Status tab, click to the right of the cluster name and select Add a Service. A list of service types displays. You can add one type of service at a time.
  2. Select YARN (MR2 Included) and click Continue.
  3. Select the services on which the new service should depend. All services must depend on the same ZooKeeper service. Click Continue.
  4. Customize the assignment of role instances to hosts. The wizard evaluates the hardware configurations of the hosts to determine the best hosts for each role. The wizard assigns all worker roles to the same set of hosts to which the HDFS DataNode role is assigned. You can reassign role instances if necessary.

    Click a field below a role to display a dialog box containing a list of hosts. If you click a field containing multiple hosts, you can also select All Hosts to assign the role to all hosts, or Custom to display the pageable hosts dialog box.

    The following shortcuts for specifying hostname patterns are supported:
    • Range of hostnames (without the domain portion)
      Range Definition          Matching Hosts
      10.1.1.[1-4]              10.1.1.1, 10.1.1.2, 10.1.1.3, 10.1.1.4
      host[1-3].company.com     host1.company.com, host2.company.com, host3.company.com
      host[07-10].company.com   host07.company.com, host08.company.com, host09.company.com, host10.company.com
    • IP addresses
    • Rack name

    Click the View By Host button for an overview of the role assignment by hostname ranges.
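The host ranges above expand much the way shell brace expansion does, which can be a convenient way to sanity-check a pattern before assigning roles. The following is an illustration only (the wizard performs the expansion itself; the hostnames are examples, and zero-padded ranges require bash 4 or later):

    # Bash brace expansion mirrors the wizard's range shorthand, including zero-padding:
    echo host{1..3}.company.com
    # host1.company.com host2.company.com host3.company.com
    echo host{07..10}.company.com
    # host07.company.com host08.company.com host09.company.com host10.company.com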

Configuring Memory Settings for YARN and MRv2

Memory configuration for YARN and MRv2 is important for getting the best performance from your cluster. Several different settings are involved. The table below shows the default settings, as well as the settings that Cloudera recommends, for each configuration option. See Managing YARN (MRv2) and MapReduce (MRv1) for more configuration specifics; and, for detailed tuning advice with sample configurations, see Tuning YARN.
Table 1. YARN and MRv2 Memory Configuration

Cloudera Manager Property Name   | CDH Property Name                    | Default Configuration           | Cloudera Tuning Guidelines
Container Memory Minimum         | yarn.scheduler.minimum-allocation-mb | 1 GB                            | 0
Container Memory Maximum         | yarn.scheduler.maximum-allocation-mb | 64 GB                           | Amount of memory on the largest host
Container Memory Increment       | yarn.scheduler.increment-allocation-mb | 512 MB                        | Use a fairly large value, such as 128 MB
Container Memory                 | yarn.nodemanager.resource.memory-mb | 8 GB                            | 8 GB
Map Task Memory                  | mapreduce.map.memory.mb             | 1 GB                            | 1 GB
Reduce Task Memory               | mapreduce.reduce.memory.mb          | 1 GB                            | 1 GB
Map Task Java Opts Base          | mapreduce.map.java.opts             | -Djava.net.preferIPv4Stack=true | -Djava.net.preferIPv4Stack=true -Xmx768m
Reduce Task Java Opts Base       | mapreduce.reduce.java.opts          | -Djava.net.preferIPv4Stack=true | -Djava.net.preferIPv4Stack=true -Xmx768m
ApplicationMaster Memory         | yarn.app.mapreduce.am.resource.mb   | 1 GB                            | 1 GB
ApplicationMaster Java Opts Base | yarn.app.mapreduce.am.command-opts  | -Djava.net.preferIPv4Stack=true | -Djava.net.preferIPv4Stack=true -Xmx768m
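The per-job properties in Table 1 (task memory, Java opts, and ApplicationMaster memory) can also be overridden at job submission time. The following is a minimal sketch, assuming a hypothetical job JAR and driver class whose main class implements Tool and is launched through ToolRunner, so that -D options are parsed; the cluster-wide yarn.scheduler.* and yarn.nodemanager.* properties must instead be set through Cloudera Manager:

    # my-app.jar and com.example.MyJob are placeholders; the -D overrides apply
    # only to this job submission, not to the cluster configuration.
    hadoop jar my-app.jar com.example.MyJob \
        -D mapreduce.map.memory.mb=1024 \
        -D mapreduce.reduce.memory.mb=1024 \
        -D "mapreduce.map.java.opts=-Djava.net.preferIPv4Stack=true -Xmx768m" \
        -D "mapreduce.reduce.java.opts=-Djava.net.preferIPv4Stack=true -Xmx768m" \
        -D yarn.app.mapreduce.am.resource.mb=1024 \
        /input /output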

Configuring Directories

Minimum Required Role: Cluster Administrator (also provided by Full Administrator)

Creating the Job History Directory

When adding the YARN service, the Add Service wizard automatically creates a job history directory. If you quit the Add Service wizard or it does not finish, you can create the directory outside the wizard:
  1. Go to the YARN service.
  2. Select Actions > Create Job History Dir.
  3. Click Create Job History Dir again to confirm.

Creating the NodeManager Remote Application Log Directory

When adding the YARN service, the Add Service wizard automatically creates a remote application log directory. If you quit the Add Service wizard or it does not finish, you can create the directory outside the wizard:
  1. Go to the YARN service.
  2. Select Actions > Create NodeManager Remote Application Log Directory.
  3. Click Create NodeManager Remote Application Log Directory again to confirm.
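If you want to confirm that either directory was created, you can list it in HDFS from any cluster host. The paths below are the usual CDH defaults and are assumptions; they differ if you have customized the mapreduce.jobhistory.* properties or yarn.nodemanager.remote-app-log-dir:

    # Assumed CDH default locations; adjust to your configuration.
    sudo -u hdfs hadoop fs -ls /user/history   # job history directory
    sudo -u hdfs hadoop fs -ls /tmp/logs       # NodeManager remote application log directory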

Importing MapReduce Configurations to YARN

Minimum Required Role: Cluster Administrator (also provided by Full Administrator)

  Warning: In addition to importing configuration settings, the import process:
  • Configures services to use YARN as the MapReduce computation framework instead of the MapReduce (MRv1) service.
  • Overwrites existing YARN configuration and role assignments.
When you upgrade from CDH 4 to CDH 5, you can import MapReduce configurations to YARN as part of the upgrade wizard. If you do not import configurations during upgrade, you can manually import the configurations at a later time:
  1. Go to the YARN service page.
  2. Stop the YARN service.
  3. Select Actions > Import MapReduce Configuration. The import wizard presents a warning letting you know that it will import your configuration, restart the YARN service and its dependent services, and update the client configuration.
  4. Click Continue to proceed. The next page indicates some additional configuration required by YARN.
  5. Verify or modify the configurations and click Continue. The Switch Cluster to MR2 step proceeds.
  6. When all steps have been completed, click Finish.
  7. (Optional) Remove the MapReduce service.
    1. Click the Cloudera Manager logo to return to the Home page.
    2. In the MapReduce row, right-click and select Delete. Click Delete to confirm.
  8. Recompile JARs used in MapReduce applications. For further information, see For MapReduce Programmers: Writing and Running Jobs.
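As a sketch of step 8, a job can be recompiled against the CDH 5 (MR2) client libraries already installed on a cluster host; MyJob.java is a placeholder for your own source file:

    # 'hadoop classpath' prints the installed CDH client classpath, so the
    # compile picks up the MR2 APIs.
    javac -classpath "$(hadoop classpath)" MyJob.java
    jar cf my-job.jar MyJob*.class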

Configuring the YARN Scheduler

Minimum Required Role: Configurator (also provided by Cluster Administrator, Full Administrator)

The YARN service is configured by default to use the Fair Scheduler. You can change the scheduler type to FIFO or Capacity Scheduler. You can also modify the Fair Scheduler and Capacity Scheduler configuration. For further information on schedulers, see YARN (MRv2) and MapReduce (MRv1) Schedulers.

Configuring the Scheduler Type

  1. Go to the YARN service.
  2. Click the Configuration tab.
  3. Select Scope > ResourceManager.
  4. Select Category > Main.
  5. Select a scheduler class.

    If more than one role group applies to this configuration, edit the value for the appropriate role group. See Modifying Configuration Properties Using Cloudera Manager.

  6. Click Save Changes to commit the changes.
  7. Restart the YARN service.
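One way to confirm which scheduler is active after the restart is the ResourceManager REST API; the hostname below is a placeholder, and 8088 is the default ResourceManager web UI port:

    # The response names the scheduler type, for example "fairScheduler",
    # "capacityScheduler", or "fifoScheduler".
    curl -s http://rm-host.company.com:8088/ws/v1/cluster/scheduler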

Modifying the Scheduler Configuration

  1. Go to the YARN service.
  2. Click the Configuration tab.
  3. Select Scope > ResourceManager.
  4. Type Scheduler in the Search box.
  5. Locate a property and modify the configuration.

    If more than one role group applies to this configuration, edit the value for the appropriate role group. See Modifying Configuration Properties Using Cloudera Manager.

  6. Click Save Changes to commit the changes.
  7. Restart the YARN service.
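As an aside, on clusters managed outside Cloudera Manager, queue-level changes to the Capacity Scheduler can be reloaded without a full restart; this is a sketch of that alternative, not the Cloudera Manager workflow described above (the Fair Scheduler rereads its allocation file automatically):

    # Reloads capacity-scheduler.xml queue definitions on an unmanaged cluster;
    # under Cloudera Manager, use the restart workflow described above instead.
    yarn rmadmin -refreshQueues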

Dynamic Resource Management

In addition to the static resource management available to all services, the YARN service also supports dynamic management of its static allocation. See Dynamic Resource Pools.

Configuring YARN for Long-running Applications

On a secure cluster, long-running applications such as Spark Streaming jobs need additional configuration: by default, the hdfs user's delegation tokens have a maximum lifetime of 7 days, which is not always sufficient for such workloads. For instructions on how to work around this issue, see Configuring Spark on YARN for Long-Running Applications.

Task Process Exit Codes

All YARN tasks on the NodeManager are run in a JVM. When a task runs successfully, the exit code is 0. Exit codes of 0 are not logged, as they are the expected result. Any non-zero exit code is logged as an error. The non-zero exit code is reported by the NodeManager as an error in the child process. The NodeManager itself is not affected by the error.

The task JVM might exit with a non-zero code for multiple reasons, though there is no exhaustive list. Exit codes can be split into two categories:
  • Set by the JVM based on the OS signal received by the JVM
  • Directly set in the code

Signal-Related Exit Codes

When the OS sends a signal to the JVM, the JVM handles the signal, which could cause the JVM to exit. Not all signals cause the JVM to exit. Exit codes for OS signals have a value between 128 and 160: the code is 128 plus the number of the signal received. Logs show these non-zero status codes without further explanation.

Two exit values that typically do not require investigation are 137 and 143. These values are logged when the JVM is killed by the NodeManager or the OS. The NodeManager might kill a JVM due to task preemption (if that is configured) or a speculative run. The OS might kill the JVM when the JVM exceeds system limits like CPU time. You should investigate these codes if they appear frequently, as they might indicate a misconfiguration or a structural problem with regard to resources.
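Because a signal-related exit code is 128 plus the signal number, you can decode a code from the logs with a quick shell check; the code value here is just an example:

    # Map an exit code in the signal range back to its signal name.
    code=143
    if [ "$code" -gt 128 ] && [ "$code" -lt 160 ]; then
      sig=$((code - 128))
      echo "exit code $code -> signal $sig ($(kill -l $sig))"
    fi
    # exit code 143 -> signal 15 (TERM); 137 would map to 9 (KILL)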

Exit code 154 is used in RecoveredContainerLaunch#call to indicate containers that were lost between NodeManager restarts without an exit code being recorded. This usually indicates a bug and requires investigation.

Other Exit Codes

The JVM might exit if there is an unrecoverable error while running a task. The exit code and the message logged should provide more detail. A Java stack trace might also be logged as part of the exit. These exits should be investigated further to discover a root cause.

In the case of a streaming MapReduce job, the exit code of the JVM is the exit code of the mapper or reducer in use. The mapper or reducer can be a shell script or Python script, which means that the underlying script dictates the exit code: take this into account when investigating streaming jobs.
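For example, a minimal shell-script mapper like the hypothetical sketch below surfaces whatever status it exits with as the task's exit code:

    #!/bin/bash
    # Hypothetical streaming mapper: emits each input line with a count of 1.
    # The status this script exits with becomes the task's reported exit code.
    while read -r line; do
      printf '%s\t1\n' "$line"
    done
    exit 0   # a non-zero value here would be logged as a task error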
