Cloudera Search and Other Cloudera Components
Cloudera Search interacts with other Cloudera components to solve different problems. The following table lists Cloudera components that contribute to the Search process and describes how they interact with Cloudera Search:
Component | Contribution | Applicable To |
---|---|---|
HDFS | Stores source documents. Search indexes source documents to make them searchable. Files that support Cloudera Search, such as Lucene index files and write-ahead logs, are also stored in HDFS. Using HDFS provides simpler provisioning on a larger base, redundancy, and fault tolerance. With HDFS, Cloudera Search servers are essentially stateless, so host failures have minimal consequences. HDFS also provides snapshotting, inter-cluster replication, and disaster recovery. | All cases |
MapReduce | Search includes a pre-built MapReduce-based job that can be used for on-demand or scheduled indexing of any supported data set stored in HDFS. The job uses cluster resources for scalable batch indexing; an example invocation appears after this table. | Many cases |
Flume | Search includes a Flume sink that enables writing events directly to indexers deployed on the cluster, allowing data indexing during ingestion. | Many cases |
Hue | Search includes a Hue front-end search application that uses standard Solr APIs (a query example using these APIs appears after this table). The application can interact with data indexed in HDFS and provides support for the Solr standard query language, visualization of faceted search functionality, and a typical GUI-based full-text search interface. | Many cases |
Morphlines | A morphline is a rich configuration file that defines an ETL transformation chain. Morphlines can consume any kind of data from any data source, process the data, and load the results into Cloudera Search. Morphlines run in a small, embeddable Java runtime system and can be used for near-real-time applications such as a Flume agent as well as batch processing applications such as a Spark job. A minimal morphline configuration appears after this table. | Many cases |
ZooKeeper | Coordinates the distribution of data and metadata, also known as shards, and provides automatic failover to increase service resiliency. | Many cases |
Spark | The CrunchIndexerTool can use Spark to move data from HDFS files into Apache Solr, and run the data through a morphline for extraction and transformation. | Some cases |
HBase | Supports indexing of stored data, extracting columns, column families, and key information as fields. Although HBase does not use secondary indexing, Cloudera Search can complete full-text searches of content in rows and tables in HBase. | Some cases |
Cloudera Manager | Deploys, configures, manages, and monitors Cloudera Search processes and resource utilization across services on the cluster. Cloudera Manager helps simplify Cloudera Search administration, but it is not required. | Some cases |
Cloudera Navigator | Cloudera Navigator provides governance for Hadoop systems including support for auditing Search operations. | Some cases |
Sentry | Sentry enables role-based, fine-grained authorization for Cloudera Search. Sentry can apply a range of restrictions to various tasks, such as accessing data, managing configurations through config objects, or creating collections. Restrictions are consistently applied, regardless of how users attempt to complete actions. For example, restricting access to data in a collection restricts that access whether queries come from the command line, a browser, Hue, or the admin console. An example Sentry policy file appears after this table. | Some cases |
Oozie | Automates scheduling and management of indexing jobs. Oozie can check for new data and begin indexing jobs as required. | Some cases |
Impala | Further analyzes search results. | Some cases |
Hive | Further analyzes search results. | Some cases |
Parquet | Provides a columnar storage format, enabling especially rapid result returns for structured workloads such as Impala or Hive. Morphlines provide an efficient pipeline for extracting data from Parquet. | Some cases |
Avro | Includes metadata that Cloudera Search can use for indexing. | Some cases |
Kafka | Search uses this message broker project to increase throughput and decrease latency for handling real-time data. | Some cases |
Sqoop | Ingests data in batch and enables data availability for batch indexing. | Some cases |
Mahout | Applies machine-learning processing to search results. | Some cases |
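The following is a minimal sketch of a morphline configuration of the kind described in the Morphlines row above. The collection name, ZooKeeper quorum, morphline id, and importCommands packages are illustrative assumptions; the commands available depend on the morphline libraries deployed with your version of Cloudera Search.

```
# Hypothetical Solr locator; the collection name and ZooKeeper quorum are placeholders.
SOLR_LOCATOR : {
  collection : collection1
  zkHost : "zk01.example.com:2181/solr"
}

morphlines : [
  {
    # Name used to select this morphline from the file (assumed).
    id : morphline1
    importCommands : ["org.kitesdk.**", "org.apache.solr.**"]
    commands : [
      # Emit one record per line of the input stream.
      { readLine { charset : UTF-8 } }

      # Log each record at DEBUG level for troubleshooting.
      { logDebug { format : "output record: {}", args : ["@{}"] } }

      # Load the record into the Solr collection referenced by SOLR_LOCATOR.
      { loadSolr { solrLocator : ${SOLR_LOCATOR} } }
    ]
  }
]
```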
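As a sketch of the batch indexing job mentioned in the MapReduce row, the MapReduceIndexerTool can be launched from the command line roughly as follows. The jar location, HDFS paths, ZooKeeper quorum, and collection name are placeholders that vary by installation.

```bash
hadoop jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-*-job.jar \
  org.apache.solr.hadoop.MapReduceIndexerTool \
  --morphline-file morphline.conf \
  --output-dir hdfs://nameservice1/user/solr/outdir \
  --zk-host zk01.example.com:2181/solr \
  --collection collection1 \
  --go-live \
  hdfs://nameservice1/user/solr/indir
```

The --go-live option merges the indexes built by the job into the live Solr collection; omit it to leave the output in the specified HDFS directory for inspection.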
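Because collections indexed by Cloudera Search are served through standard Solr APIs (the same APIs the Hue search application uses), they can also be queried programmatically. The following Java sketch uses SolrJ; the ZooKeeper quorum, collection name, and field names are assumptions, and the exact client classes depend on the SolrJ version available in your environment.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class SearchQueryExample {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper quorum used by the SolrCloud cluster.
        String zkHost = "zk01.example.com:2181/solr";
        try (CloudSolrClient solr = new CloudSolrClient.Builder().withZkHost(zkHost).build()) {
            // Placeholder collection name.
            solr.setDefaultCollection("collection1");

            // Standard Solr query syntax with a facet on a hypothetical field.
            SolrQuery query = new SolrQuery("body:hadoop");
            query.setRows(10);
            query.addFacetField("source_type");

            QueryResponse response = solr.query(query);
            for (SolrDocument doc : response.getResults()) {
                System.out.println(doc.getFieldValue("id"));
            }
        }
    }
}
```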
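In deployments that secure Search with a file-based Sentry provider, the restrictions described in the Sentry row are expressed as group-to-role and role-to-privilege mappings. The following is a minimal sketch; the group names, role names, collection name, and use of a policy file (rather than config objects) are assumptions that depend on how Sentry is configured in your environment.

```
[groups]
# Map Hadoop groups to Sentry roles (group and role names are placeholders).
analysts = query_role
developers = admin_role

[roles]
# Allow members of the analysts group to run queries against the logs collection.
query_role = collection = logs->action=Query

# Allow members of the developers group full access to the same collection.
admin_role = collection = logs->action=*
```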