
Video Game Analysis

Overview

This Data Engineering Zoomcamp project analyzes video game data to identify trends and insights. The data is extracted from the RAWG API using Python and stored in Google Cloud Storage (GCS), then transformed using dbt Core and Mage. The entire process is orchestrated by Mage.

Data Source

Data is extracted from the RAWG Video Games Database API. The API provides information on video games, including game title, genre, platform, release date, and more. The API documentation can be found here.
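As a quick sanity check, the API can be queried directly from the shell. A minimal sketch, assuming a RAWG API key is available in a RAWG_API_KEY environment variable and jq is installed:

```bash
# Fetch one page of games and show a few of the fields the pipeline relies on.
curl -s "https://api.rawg.io/api/games?key=${RAWG_API_KEY}&page_size=5" \
  | jq '.results[] | {name, released, genres: [.genres[].name]}'
```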

For the data dictionary, refer to the dbt documentation (click "Sources") here.

Data Architecture

(Data architecture diagram)

Data Extraction and Load (Batch Ingestion)

  1. Python script using Mage

  2. Google Cloud Storage (GCS)
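Once a batch has been extracted, the raw files land in the two buckets created during setup. A sketch of the load step using the gsutil CLI; the bucket and file names below are illustrative, not the repo's actual names:

```bash
# Upload one extracted batch: an immutable copy to the historical bucket,
# and an overwriting copy to the latest bucket (names are assumptions).
gsutil cp data/games_2024-04-01.parquet gs://video-game-historical/
gsutil cp data/games_2024-04-01.parquet gs://video-game-latest/games.parquet
```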

Data Transformation

  1. dbt Core
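The transformation layer is plain dbt Core run against BigQuery. A minimal sketch of the commands involved, assuming a configured profiles.yml:

```bash
# Install dbt packages, then build all models, tests, and seeds.
dbt deps
dbt build
```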

Data Warehouse

  1. Google BigQuery

Data Visualization

  1. Google Looker Studio

Data Orchestration

  1. Mage

Infrastructure as Code

  1. Terraform

CI/CD

  1. GitHub Actions - to generate and host dbt documentation
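The docs job boils down to generating the static dbt site and publishing it. A sketch of the generation step a workflow might run (hosting specifics vary):

```bash
# Produce the static documentation site in target/.
dbt docs generate
# target/index.html, catalog.json, and manifest.json can then be
# published, e.g. to GitHub Pages.
```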

Data Product

  1. End-to-end pipeline to extract, transform, and load video game data (screenshot: Mage pipeline run)

  2. dbt docs to view the data model and documentation (screenshot: dbt lineage)

  3. Dashboard built with Looker Studio (screenshot: dashboard)

Setup

  1. Fork this repository
  2. Create a codespace using the forked repository
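Both steps can also be done from the GitHub CLI. A sketch, assuming gh is installed and authenticated:

```bash
# Fork the repo, then spin up a codespace on the fork.
gh repo fork alangan17/video-game-analysis --clone
gh codespace create --repo <your-username>/video-game-analysis
```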

Google Cloud Setup

  1. Create a Google Cloud account and project.
  2. In Google Cloud Storage, create 2 buckets (one for historical data and one for the latest data).
  3. Create a service account with the following roles:
    1. Storage Admin
    2. BigQuery Admin
  4. Download the service account key and save it as ./keys/gcp-creds.json

> [!IMPORTANT]
> Do not commit the service account key to the repository.
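Steps 2 through 4 can also be scripted with the gcloud CLI. A sketch only; the project ID, bucket names, and service account name below are placeholders, not the repo's actual values:

```bash
PROJECT=my-gcp-project           # placeholder project ID
SA=video-game-pipeline           # placeholder service account name

# Two buckets: one for historical snapshots, one for the latest batch.
gcloud storage buckets create gs://${PROJECT}-historical gs://${PROJECT}-latest

# Service account with the two required roles.
gcloud iam service-accounts create ${SA}
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member="serviceAccount:${SA}@${PROJECT}.iam.gserviceaccount.com" \
  --role="roles/storage.admin"
gcloud projects add-iam-policy-binding ${PROJECT} \
  --member="serviceAccount:${SA}@${PROJECT}.iam.gserviceaccount.com" \
  --role="roles/bigquery.admin"

# Key file, saved where the project expects it.
gcloud iam service-accounts keys create ./keys/gcp-creds.json \
  --iam-account=${SA}@${PROJECT}.iam.gserviceaccount.com
```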

Compute Environment Setup

  1. Install Terraform (follow the instructions here).
  2. Run the following commands to set up the GCP environment:

```bash
terraform init
terraform plan
terraform apply
```
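If the Terraform configuration exposes variables for the project ID and region (the variable names here are assumptions), they can be passed explicitly:

```bash
# Override variables on the command line instead of editing tfvars files.
terraform apply -var="project_id=my-gcp-project" -var="region=us-central1"
```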

  3. Prepare config files and create directories:

```bash
bash script/00_repo_initial_setup.sh
```

  4. Prepare the .env file.
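The exact variables come from the repo's template; the example below is illustrative only (the names are assumptions, except GOOGLE_APPLICATION_CREDENTIALS, which is the standard Google client variable):

```bash
# .env (illustrative)
RAWG_API_KEY=your_rawg_api_key
GOOGLE_APPLICATION_CREDENTIALS=./keys/gcp-creds.json
```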

  5. Start the Docker containers:

```bash
docker-compose up -d
```
  6. Open the Mage application in the browser (by default at http://localhost:6789).

  7. Run the end_to_end_pipeline pipeline.
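The run can also be triggered from the command line instead of the UI. A sketch, assuming the Mage project directory is named magic (adjust to the repo's actual project path):

```bash
# Execute the pipeline once from the CLI.
mage run magic end_to_end_pipeline
```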

  8. Check the data in BigQuery.
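A quick check from the shell, with a placeholder project, dataset, and table name:

```bash
# Row count of one of the loaded tables (names are assumptions).
bq query --use_legacy_sql=false \
  'SELECT COUNT(*) AS n FROM `my-gcp-project.video_games.games`'
```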

  9. Clone the dashboard in Looker Studio and connect the data source to BigQuery.