
Configuring Kerberos Authentication for Hue

To configure the Hue server to support Hadoop security using Kerberos:

  1. Create a Hue user principal in the same realm as the Hadoop cluster of the form:
    kadmin: addprinc -randkey hue/hue.server.fully.qualified.domain.name@YOUR-REALM.COM
    where:
    - hue is the principal the Hue server is running as
    - hue.server.fully.qualified.domain.name is the fully qualified domain name (FQDN) of your Hue server
    - YOUR-REALM.COM is the name of the Kerberos realm your Hadoop cluster is in
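    For example, if your Hue server runs on hue01.example.com and your realm is EXAMPLE.COM (hypothetical values, reused in the examples below), the command would be:
      kadmin: addprinc -randkey hue/hue01.example.com@EXAMPLE.COM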
  2. Create a keytab file for the Hue principal using the same procedure that you used to create the keytab for the hdfs or mapred principal for a specific host. Name this file hue.keytab and put it in the /etc/hue directory on the machine running the Hue server. Like all keytab files, this file should have the most limited set of permissions possible: it should be owned by the user running the Hue server (usually hue) and have mode 400.
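    As a sketch, assuming the hypothetical principal from step 1, you can extract the keytab with the kadmin xst (also known as ktadd) command and then restrict its ownership and permissions:
      kadmin: xst -k /etc/hue/hue.keytab hue/hue01.example.com@EXAMPLE.COM
      $ sudo chown hue:hue /etc/hue/hue.keytab
      $ sudo chmod 400 /etc/hue/hue.keytab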
  3. To test that the keytab file was created properly, try to obtain Kerberos credentials as the Hue principal using only the keytab file. Substitute your FQDN and realm in the following command:
    $ kinit -k -t /etc/hue/hue.keytab hue/hue.server.fully.qualified.domain.name@YOUR-REALM.COM
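    If the kinit command succeeds without prompting for a password, the keytab is valid; running klist afterward should show a ticket-granting ticket for the Hue principal:
      $ klist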
  4. In the /etc/hue/hue.ini configuration file, add the following lines in the sections shown. Replace the kinit_path value shown below, /usr/kerberos/bin/kinit, with the correct path on your system.
    [desktop]
    
     [[kerberos]]
     # Path to Hue's Kerberos keytab file
     hue_keytab=/etc/hue/hue.keytab
     # Kerberos principal name for Hue
     hue_principal=hue/FQDN@REALM
     # add kinit path for non root users
     kinit_path=/usr/kerberos/bin/kinit
    
    [beeswax]
     # If Kerberos security is enabled, use fully qualified domain name (FQDN)
       ## hive_server_host=<FQDN of Hive Server>
     # Hive configuration directory, where hive-site.xml is located  
       ## hive_conf_dir=/etc/hive/conf
    
    [impala]
       ## server_host=localhost
     # The following property is required when impalad and Hue
     # are not running on the same host 
       ## impala_principal=impala/impalad.hostname.domainname.com 
    
    [search]
     # URL of the Solr Server  
       ## solr_url=http://localhost:8983/solr/  
     # Requires FQDN in solr_url if enabled  
       ## security_enabled=false
     
    [hadoop]
    
     [[hdfs_clusters]]
    
     [[[default]]]
     # Enter the host and port on which you are running the Hadoop NameNode
     namenode_host=FQDN
     hdfs_port=8020
     http_port=50070
     security_enabled=true
      
     # Thrift plugin port for the name node
       ## thrift_port=10090
    
     # Configuration for YARN (MR2)
     # ------------------------------------------------------------------------
     [[yarn_clusters]]
    
     [[[default]]]
     # Enter the host on which you are running the ResourceManager
       ## resourcemanager_host=localhost
     # Change this if your YARN cluster is Kerberos-secured      
       ## security_enabled=false
    
     # Thrift plug-in port for the JobTracker
       ## thrift_port=9290
    
    [liboozie]
     # The URL on which the Oozie service runs. This is required for users to submit jobs.
       ## oozie_url=http://localhost:11000/oozie  
     # Requires FQDN in oozie_url if enabled  
       ## security_enabled=false
      Important:

    In the /etc/hue/hue.ini file, verify the following:

    — Make sure the jobtracker_host property is set to the fully qualified domain name of the host running the JobTracker. The JobTracker hostname must be fully qualified in a secured environment.

    — Make sure the fs_defaultfs property under each [[hdfs_clusters]] section contains the fully qualified domain name of the file system access point, which is typically the NameNode.

    — Make sure the hive_conf_dir property under the [beeswax] section points to a directory containing a valid hive-site.xml (either the original or a synced copy).

    — Make sure the FQDN specified for HiveServer2 is the same as the FQDN specified for the hue_principal configuration property. Without this, HiveServer2 will not work with security enabled.

    Also note that HiveServer2 currently does not support SSL when using Kerberos.
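    As a quick sanity check of the [[kerberos]] settings (using the same hypothetical FQDN and realm as in the earlier examples), confirm that the hue user can read the keytab and obtain a ticket for the configured principal:
      $ sudo -u hue kinit -k -t /etc/hue/hue.keytab hue/hue01.example.com@EXAMPLE.COM
      $ sudo -u hue klist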

  5. In the /etc/hadoop/conf/core-site.xml configuration file on all of your cluster nodes, add the following lines:
    <!-- Hue security configuration -->
    <property>
      <name>hue.kerberos.principal.shortname</name>
      <value>hue</value>
    </property>
    <property>
      <name>hadoop.proxyuser.hue.groups</name>
      <value>*</value> <!-- A group which all users of Hue belong to, or the wildcard value "*" -->
    </property>
    <property>
      <name>hadoop.proxyuser.hue.hosts</name>
      <value>hue.server.fully.qualified.domain.name</value>
    </property>
      Important:

    Make sure you change the /etc/hadoop/conf/core-site.xml configuration file on all of your cluster nodes.
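    One way to verify the proxyuser settings is to obtain a ticket as the Hue principal and make a Kerberos-authenticated WebHDFS call with the doas parameter; if impersonation is misconfigured, the call fails with an authorization error. This is a sketch that assumes WebHDFS is enabled on the NameNode, and namenode.example.com and alice are hypothetical placeholders for your NameNode host and an existing Hadoop user:
      $ kinit -k -t /etc/hue/hue.keytab hue/hue01.example.com@EXAMPLE.COM
      $ curl --negotiate -u : "http://namenode.example.com:50070/webhdfs/v1/user/alice?op=GETFILESTATUS&doas=alice"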

  6. If Hue is configured to communicate with Hadoop using HttpFS, you must add the following properties to the httpfs-site.xml configuration file:
    <property>
      <name>httpfs.proxyuser.hue.hosts</name>
      <value>fully.qualified.domain.name</value>
    </property>
    <property>
      <name>httpfs.proxyuser.hue.groups</name>
      <value>*</value>
    </property>  
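    You can exercise the HttpFS proxyuser settings the same way as in step 5. This sketch assumes HttpFS listens on its default port, 14000; httpfs.server.fully.qualified.domain.name and alice are placeholders for your own values:
      $ curl --negotiate -u : "http://httpfs.server.fully.qualified.domain.name:14000/webhdfs/v1/user/alice?op=GETFILESTATUS&doas=alice"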
  7. Add the following properties to the Oozie server oozie-site.xml configuration file in the Oozie configuration directory:
    <property>
       <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
       <value>*</value>
    </property>
    <property>
       <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
       <value>*</value>
    </property>
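    After restarting Oozie in step 9, you can confirm that a Kerberos-authenticated client can reach the server. This sketch assumes you have a valid ticket in your credentials cache and that Oozie runs at the default URL shown in the [liboozie] section above:
      $ oozie admin -oozie http://oozie.server.fully.qualified.domain.name:11000/oozie -status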
  8. Restart the JobTracker to load the changes from the core-site.xml file.
    $ sudo service hadoop-0.20-mapreduce-jobtracker restart
  9. Restart Oozie to load the changes from the oozie-site.xml file.
    $ sudo service oozie restart
  10. Restart the NameNode, JobTracker, and all DataNodes to load the changes from the core-site.xml file.
    $ sudo service hadoop-hdfs-namenode restart
    $ sudo service hadoop-0.20-mapreduce-jobtracker restart
    $ sudo service hadoop-hdfs-datanode restart