Open christianfelicite opened 4 years ago
https://fr.m.wikipedia.org/wiki/Apache_Hive
Apache Hive supports the analysis of large datasets stored in Hadoop HDFS or in compatible file systems such as Amazon S3.
It provides an SQL-like language called HiveQL with schema-on-read, and transparently converts queries into MapReduce, Apache Tez, or Spark jobs.
All three execution engines can run on Hadoop YARN. To speed up queries, it provides indexes, including bitmap indexes.
By default, Hive stores metadata in an embedded Apache Derby database; other client/server databases such as MySQL can optionally be used.
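As a sketch, switching the metastore from the embedded Derby database to MySQL is configured in hive-site.xml; the host and database names below are placeholders, not values from the source:

```xml
<!-- Sketch: pointing the Hive metastore at a MySQL server instead of
     the embedded Derby database. Host/database names are placeholders. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-host:3306/hive_metastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
```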
Currently, four file formats are supported by Hive: TEXTFILE, SEQUENCEFILE, ORC, and RCFile. Apache Parquet files can be read via a plugin in versions later than 0.10, and natively starting with 0.13.
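A minimal sketch of choosing one of these storage formats at table creation time (table and column names are illustrative):

```sql
-- Sketch: the STORED AS clause selects the file format.
CREATE TABLE page_views_orc (
  user_id BIGINT,
  url     STRING
)
STORED AS ORC;

-- TEXTFILE is the default; SEQUENCEFILE, RCFILE and (from 0.13) PARQUET
-- are selected the same way.
CREATE TABLE page_views_parquet (
  user_id BIGINT,
  url     STRING
)
STORED AS PARQUET;
```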
Although based on SQL, HiveQL does not strictly follow the SQL standard. HiveQL provides extensions not found in SQL, for example multi-table inserts, but offers only basic support for indexing. HiveQL also lacks support for transactions and materialized views, and offers only limited subquery support. Support for insert, update, and delete with full ACID functionality was made available with release 0.14.
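The multi-table insert extension mentioned above can be sketched like this (all table names are illustrative, and the target tables are assumed to exist): a single scan of the source table feeds several targets.

```sql
-- Sketch of a HiveQL multi-table insert: one pass over page_views
-- populates two target tables at once.
FROM page_views pv
INSERT OVERWRITE TABLE views_by_user
  SELECT pv.user_id, COUNT(*)
  GROUP BY pv.user_id
INSERT OVERWRITE TABLE views_by_url
  SELECT pv.url, COUNT(*)
  GROUP BY pv.url;
```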
Internally, the compiler translates HiveQL statements into a directed acyclic graph of MapReduce or Tez jobs, or a Spark job, which is then submitted to Hadoop for execution.
https://www.analyticsvidhya.com/blog/2020/10/apache-hive-table-types/
What is Apache Hive? Apache Hive is a data warehouse system for Apache Hadoop. It provides SQL-like access to data in HDFS so that Hadoop can be used as a warehouse structure. Hive allows you to impose structure on largely unstructured data. After you define the structure, you can use Hive to query the data without knowledge of Java or MapReduce.
The Hive Query Language (HQL) has similar semantics and functions as standard SQL in the relational database so that experienced database analysts can easily access the data.
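For example, a typical HQL query reads just like standard SQL; the table and columns below are illustrative:

```sql
-- A typical HQL query: familiar SQL semantics over data in HDFS.
SELECT department, AVG(salary) AS avg_salary
FROM employees
WHERE hire_year >= 2015
GROUP BY department
ORDER BY avg_salary DESC
LIMIT 10;
```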
https://www.tutorialspoint.com/hive/hive_introduction.htm
Hive is a data warehouse infrastructure tool to process structured data in Hadoop. It resides on top of Hadoop to summarize Big Data, and makes querying and analyzing easy.
Initially, Hive was developed by Facebook; later, the Apache Software Foundation took it up and developed it further as open source under the name Apache Hive. It is used by different companies. For example, Amazon uses it in Amazon Elastic MapReduce.
Hive is not:
- A relational database
- A design for OnLine Transaction Processing (OLTP)
- A language for real-time queries and row-level updates
Features of Hive:
- It stores the schema in a database and the processed data in HDFS.
- It is designed for OLAP.
- It provides an SQL-type language for querying called HiveQL or HQL.
- It is familiar, fast, scalable, and extensible.
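The split in the first feature above, schema in the metastore versus data in HDFS, can be sketched with an external table; the table name and HDFS path are illustrative:

```sql
-- Sketch: only the schema lives in the metastore; the data stays
-- as plain delimited files in an existing HDFS directory.
CREATE EXTERNAL TABLE logs (
  ts      STRING,
  level   STRING,
  message STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/raw/logs';
```

Dropping an external table like this removes only the metadata; the files in HDFS are left untouched.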
The following component diagram depicts the architecture of Hive:
| Unit Name | Operation |
| --- | --- |
| User Interface | Hive is data warehouse infrastructure software that enables interaction between the user and HDFS. The user interfaces that Hive supports are the Hive Web UI, the Hive command line, and Hive HDInsight (on Windows Server). |
| Meta Store | Hive chooses the respective database servers to store the schema or metadata of tables, databases, columns in a table, their data types, and the HDFS mapping. |
| HiveQL Process Engine | HiveQL is similar to SQL for querying the schema information in the Metastore. It is one of the replacements for the traditional approach of writing MapReduce programs: instead of writing a MapReduce program in Java, we can write a query for the MapReduce job and process it. |
| Execution Engine | The conjunction of the HiveQL Process Engine and MapReduce is the Hive Execution Engine. The execution engine processes the query and generates the same results as MapReduce. It uses the flavor of MapReduce. |
| HDFS or HBase | The Hadoop Distributed File System or HBase are the storage layers used to store the data. |
Working of Hive
The following diagram depicts the workflow between Hive and Hadoop.
The following table defines how Hive interacts with Hadoop framework:
| Step No. | Operation |
| --- | --- |
| 1 | **Execute Query**: The Hive interface, such as the command line or Web UI, sends the query to the Driver (any database driver such as JDBC, ODBC, etc.) to execute. |
| 2 | **Get Plan**: The driver takes the help of the query compiler, which parses the query to check the syntax and build the query plan. |
| 3 | **Get Metadata**: The compiler sends a metadata request to the Metastore (any database). |
| 4 | **Send Metadata**: The Metastore sends the metadata as a response to the compiler. |
| 5 | **Send Plan**: The compiler checks the requirements and resends the plan to the driver. Up to here, the parsing and compiling of the query is complete. |
| 6 | **Execute Plan**: The driver sends the execution plan to the execution engine. |
| 7 | **Execute Job**: Internally, the execution of the job is a MapReduce job. The execution engine sends the job to the JobTracker, which is on the Name node, and it assigns the job to the TaskTracker, which is on a Data node. Here, the query executes as a MapReduce job. |
| 7.1 | **Metadata Ops**: Meanwhile, during execution, the execution engine can perform metadata operations with the Metastore. |
| 8 | **Fetch Result**: The execution engine receives the results from the Data nodes. |
| 9 | **Send Results**: The execution engine sends those result values to the driver. |
| 10 | **Send Results**: The driver sends the results to the Hive interfaces. |
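To inspect the plan that the compiler produces in steps 2 through 5, you can prefix any query with EXPLAIN; the table name below is illustrative:

```sql
-- EXPLAIN prints the plan the compiler hands to the execution engine,
-- including the stages the query is broken into.
EXPLAIN
SELECT url, COUNT(*)
FROM page_views
GROUP BY url;
```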