The playground is a complete Apache Gravitino Docker runtime environment with Hive, HDFS, Trino, MySQL, PostgreSQL, Jupyter, and a Gravitino server.
Depending on your network and computer, startup time may take 3-5 minutes. Once the playground environment has started, you can open http://localhost:8090 in a browser to access the Gravitino Web UI.
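To confirm from the command line that the server is up, you can also hit the Gravitino REST API's version endpoint (a quick sanity check; the exact response shape may vary by release):

```shell
# Quick health check against the Gravitino REST API
curl -s http://localhost:8090/api/version
```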
Install Git (optional), Docker, and Docker Compose. Docker Desktop (or OrbStack) with Kubernetes enabled and the helm CLI are required if you use the Helm chart to deploy the services.
2 CPU cores, 8 GB RAM, 25 GB of disk storage, and macOS or Linux (verified on Ubuntu 22.04, Ubuntu 24.04, and Amazon Linux).
The playground runs several services whose TCP ports may clash with existing services on your machine, such as MySQL or PostgreSQL; see the snippet after the table for a quick way to check for conflicts.
| Docker container       | Ports used                 |
|------------------------|----------------------------|
| playground-gravitino   | 8090, 9001                 |
| playground-hive        | 3307, 19000, 19083, 60070  |
| playground-mysql       | 13306                      |
| playground-postgresql  | 15342                      |
| playground-trino       | 18080                      |
| playground-jupyter     | 18888                      |
| playground-prometheus  | 19090                      |
| playground-grafana     | 13000                      |
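If one of these ports is already taken, startup will fail. A quick way to check before starting (a sketch using the standard `lsof` tool; adjust the port list as needed):

```shell
# List any process already bound to a playground port, e.g. MySQL's 13306
lsof -i :13306

# Or scan all playground ports in one pass
for p in 8090 9001 3307 19000 19083 60070 13306 15342 18080 18888 19090 13000; do
  lsof -i :$p >/dev/null && echo "port $p is in use"
done
```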
You can install the playground with the one-line install script:

```shell
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/apache/gravitino-playground/HEAD/install.sh)"
```
Alternatively, clone the repository and manage the Docker deployment with the playground script:

```shell
git clone git@github.com:apache/gravitino-playground.git
cd gravitino-playground

# Start, check, and stop the playground
./playground.sh docker start
./playground.sh docker status
./playground.sh docker stop
```
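You can also inspect the running containers and their port mappings directly:

```shell
# List the playground containers with their mapped ports
docker ps --filter "name=playground"
```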
Enable Kubernetes in Docker Desktop or OrbStack.
In the project root directory, execute this command:
```shell
helm upgrade --install gravitino-playground ./helm-chart/ \
  --create-namespace --namespace gravitino-playground \
  --set projectRoot=$(pwd)
```
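To verify the deployment, you can check that the pods come up in the namespace created above:

```shell
kubectl get pods -n gravitino-playground
```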
Alternatively, use the playground script to manage the Kubernetes deployment:

```shell
./playground.sh k8s start
./playground.sh k8s status
```
To access the pods or services at localhost, follow these steps:
```shell
# Enter the Trino pod
TRINO_POD=$(kubectl get pods --namespace gravitino-playground -l app=trino -o jsonpath="{.items[0].metadata.name}")
kubectl exec $TRINO_POD -n gravitino-playground -it -- /bin/bash

# Enter the Spark pod
SPARK_POD=$(kubectl get pods --namespace gravitino-playground -l app=spark -o jsonpath="{.items[0].metadata.name}")
kubectl exec $SPARK_POD -n gravitino-playground -it -- /bin/bash
```
To access the Gravitino service at localhost:8090:

```shell
kubectl port-forward svc/gravitino -n gravitino-playground 8090:8090
```

To access the Jupyter Notebook service at localhost:8888:

```shell
kubectl port-forward svc/jupyternotebook -n gravitino-playground 8888:8888
```
```shell
./playground.sh k8s stop
```
Log in to the Trino Docker container and start the Trino CLI:

```shell
docker exec -it playground-trino bash
trino@container_id:/$ trino
```
1. Open the Jupyter Notebook in the browser at http://localhost:18888.
2. Open the `gravitino-trino-example.ipynb` notebook.
3. Start the notebook and run the cells.
Log in to the Spark Docker container and start Spark SQL:

```shell
docker exec -it playground-spark bash
spark@container_id:/$ cd /opt/spark && /bin/bash bin/spark-sql
```
1. Open Grafana in the browser at http://localhost:13000.
2. In the navigation menu, click Dashboards -> Gravitino Playground.
3. Experiment with the default template.
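The dashboards are backed by the bundled Prometheus instance. If a panel looks empty, you can query Prometheus directly to confirm that targets are being scraped (standard Prometheus HTTP API on the playground's 19090 port):

```shell
# Ask Prometheus which scrape targets are up
curl -s 'http://localhost:19090/api/v1/query?query=up'
```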
You can use simple queries to test in the Trino CLI.
```sql
SHOW CATALOGS;

CREATE SCHEMA catalog_hive.company
  WITH (location = 'hdfs://hive:9000/user/hive/warehouse/company.db');

SHOW CREATE SCHEMA catalog_hive.company;

CREATE TABLE catalog_hive.company.employees
(
  name varchar,
  salary decimal(10,2)
)
WITH (
  format = 'TEXTFILE'
);

INSERT INTO catalog_hive.company.employees (name, salary) VALUES ('Sam Evans', 55000);

SELECT * FROM catalog_hive.company.employees;

SHOW SCHEMAS FROM catalog_hive;

DESCRIBE catalog_hive.company.employees;

SHOW TABLES FROM catalog_hive.company;
```
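When you are done experimenting, you can optionally clean up the objects created above with standard Trino DDL:

```sql
DROP TABLE catalog_hive.company.employees;
DROP SCHEMA catalog_hive.company;
```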
In a company, there may be different departments using different data stacks. In this example, the HR department uses Apache Hive to store its data, and the sales department uses PostgreSQL. You can run some interesting queries by joining the two departments' data together with Gravitino.
To know which employee has the largest sales amount, run this SQL:
```sql
SELECT given_name, family_name, job_title, SUM(total_amount) AS total_sales
FROM catalog_hive.sales.sales AS s,
     catalog_postgres.hr.employees AS e
WHERE s.employee_id = e.employee_id
GROUP BY given_name, family_name, job_title
ORDER BY total_sales DESC
LIMIT 1;
```
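The query above uses implicit join syntax; an equivalent form with an explicit ANSI `JOIN ... ON`, which some readers find clearer, would be:

```sql
SELECT given_name, family_name, job_title, SUM(total_amount) AS total_sales
FROM catalog_hive.sales.sales AS s
JOIN catalog_postgres.hr.employees AS e
  ON s.employee_id = e.employee_id
GROUP BY given_name, family_name, job_title
ORDER BY total_sales DESC
LIMIT 1;
```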
To know the top customers who bought the most by state, run this SQL:
```sql
SELECT customer_name, location, SUM(total_amount) AS total_spent
FROM catalog_hive.sales.sales AS s,
     catalog_hive.sales.stores AS l,
     catalog_hive.sales.customers AS c
WHERE s.store_id = l.store_id AND s.customer_id = c.customer_id
GROUP BY location, customer_name
ORDER BY location, SUM(total_amount) DESC;
```
To know each employee's average performance rating and total sales, run this SQL:
```sql
SELECT e.employee_id, given_name, family_name, AVG(rating) AS average_rating, SUM(total_amount) AS total_sales
FROM catalog_postgres.hr.employees AS e,
     catalog_postgres.hr.employee_performance AS p,
     catalog_hive.sales.sales AS s
WHERE e.employee_id = p.employee_id AND p.employee_id = s.employee_id
GROUP BY e.employee_id, given_name, family_name;
```
You might also consider generating data with SparkSQL and then querying this data using Trino. Give it a try with Gravitino:
```sql
-- Use the Hive catalog to create a Hive table
USE catalog_hive;
CREATE DATABASE product;
USE product;

CREATE TABLE IF NOT EXISTS employees (
  id INT,
  name STRING,
  age INT
)
PARTITIONED BY (department STRING)
STORED AS PARQUET;

DESC TABLE EXTENDED employees;

INSERT OVERWRITE TABLE employees PARTITION(department='Engineering') VALUES (1, 'John Doe', 30), (2, 'Jane Smith', 28);
INSERT OVERWRITE TABLE employees PARTITION(department='Marketing') VALUES (3, 'Mike Brown', 32);
```

Then query the data with Trino:

```sql
SELECT * FROM catalog_hive.product.employees WHERE department = 'Engineering';
```
The demo is located in the `jupyter` folder, and you can open the `gravitino-spark-trino-example.ipynb` demo via Jupyter Notebook at http://localhost:18888.
Suppose you want to migrate your business from Hive to Iceberg, with some tables staying in Hive and others moving to Iceberg. Gravitino provides an Iceberg REST catalog service, too. You can use Spark to access the REST catalog to write table data, and then use Trino to read the data from the Hive table, joining it with the Iceberg table.
The `spark-defaults.conf` is as follows (it is already configured in the playground):

```
spark.sql.extensions org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
spark.sql.catalog.catalog_rest org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.catalog_rest.type rest
spark.sql.catalog.catalog_rest.uri http://gravitino:9001/iceberg/
spark.locality.wait.node 0
```
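If you want to experiment outside the playground, the same settings can be passed on the command line instead of via `spark-defaults.conf` (a sketch; it assumes the Iceberg Spark runtime jar is already on the classpath):

```shell
bin/spark-sql \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
  --conf spark.sql.catalog.catalog_rest=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.catalog_rest.type=rest \
  --conf spark.sql.catalog.catalog_rest.uri=http://gravitino:9001/iceberg/
```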
Please note that `catalog_rest` in SparkSQL and `catalog_iceberg` in Gravitino and Trino share the same Iceberg JDBC backend, so they can access the same dataset.
```shell
docker exec -it playground-spark bash
spark@container_id:/$ cd /opt/spark && /bin/bash bin/spark-sql
```
```sql
USE catalog_rest;
CREATE DATABASE sales;
USE sales;

CREATE TABLE customers (customer_id INT, customer_name VARCHAR(100), customer_email VARCHAR(100));
DESCRIBE EXTENDED customers;

INSERT INTO customers (customer_id, customer_name, customer_email) VALUES (11, 'Rory Brown', 'rory@123.com');
INSERT INTO customers (customer_id, customer_name, customer_email) VALUES (12, 'Jerry Washington', 'jerry@dt.com');
```
```shell
docker exec -it playground-trino bash
trino@container_id:/$ trino
```
```sql
SELECT * FROM catalog_hive.sales.customers
UNION
SELECT * FROM catalog_iceberg.sales.customers;
```
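Because `catalog_rest` (Spark) and `catalog_iceberg` (Trino) share the same backend, you can also confirm from Trino that the Spark-created objects are visible:

```sql
SHOW TABLES FROM catalog_iceberg.sales;
```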
The demo is located in the `jupyter` folder, and you can open the `gravitino-spark-trino-example.ipynb` demo via Jupyter Notebook at http://localhost:18888.
The Gravitino playground also provides a simple RAG demo with LlamaIndex. This demo shows how to use Gravitino to manage both tabular and non-tabular datasets, connect them to LlamaIndex as a unified data source, and then use LlamaIndex and an LLM to answer a single natural-language query over both kinds of data.
The demo is located in the `jupyter` folder, and you can open the `gravitino_llama_index_demo.ipynb` demo via Jupyter Notebook at http://localhost:18888.
The scenario of this demo: basic structured city statistics are stored in MySQL, while detailed city introductions are stored in PDF files. The user wants answers to questions about the cities that draw on both the structured data and the PDF files.
In this demo, you will use Gravitino to manage the MySQL table with a relational catalog and the PDF files with a fileset catalog, treating Gravitino as a unified data source for LlamaIndex to build indexes on both tabular and non-tabular data. You will then use an LLM to query the data with natural-language queries.
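For orientation, the non-tabular half of the notebook boils down to the pattern below: load the PDF documents, build a vector index, and query it in natural language. This is a minimal sketch using the public LlamaIndex core API; the local `./pdfs` path stands in for the location resolved from the Gravitino fileset catalog, and the question string is just an example — the notebook itself wires in both the MySQL table and the fileset through Gravitino.

```python
import os
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

os.environ["OPENAI_API_KEY"] = "sk-..."  # required, as noted below

# "./pdfs" is a stand-in for the path resolved from the Gravitino fileset catalog
documents = SimpleDirectoryReader("./pdfs").load_data()

# Build an in-memory vector index over the PDF contents
index = VectorStoreIndex.from_documents(documents)

# Ask a natural-language question that the PDFs can answer
query_engine = index.as_query_engine()
print(query_engine.query("Which city has the largest population?"))
```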
Note: to run this demo, you need to set `OPENAI_API_KEY` in `gravitino_llama_index_demo.ipynb` as shown below; `OPENAI_API_BASE` is optional.
```python
import os

os.environ["OPENAI_API_KEY"] = ""
os.environ["OPENAI_API_BASE"] = ""
```
Gravitino supports access control for Hive tables via the Apache Ranger plugin.
For example, suppose your company has a manager and several staff members. The manager creates a Hive catalog and defines different roles, then grants those roles to the staff members.
You can run the command:

```shell
./playground.sh start --enable-ranger
```
The demo is located in the `jupyter` folder, and you can open the `gravitino-access-control-example.ipynb` demo via Jupyter Notebook at http://localhost:18888.
Apache Gravitino is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.
Apache®, Apache Gravitino™, Apache Hive™, Apache Iceberg™, and Apache Spark™ are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.