
A squad as cool as the real Squirtle Squad! Somehow obsessed with designing a weather prediction app to save the world from global warming!

SquirtleSquad

This project aims to build an online weather prediction hosting platform.

Architecture

For internal messaging between the microservices we use Kafka; the frontend communicates with the API Gateway through RESTful calls. The following pipelines have been implemented (a quick way to inspect the Kafka traffic that drives them is sketched after the list):

Our Architecture

• The User Management Pipeline
• The Session Restore Pipeline
• The Session Logging Pipeline
• The Model Execution Pipeline
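
This README does not document the message schemas on these topics, but since every pipeline rides on one of the Kafka topics created during installation (apigateway, DataRetrieval, modelexecution, postanalysis, addAction), Kafka's bundled console tools give a quick, unofficial way to watch or inject traffic on any of them. For example, for the apigateway topic (run from the Kafka directory once step 1 of the installation is done):

    # Tail every message the services exchange on the apigateway topic.
    $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic apigateway --from-beginning

    # In a second terminal, inject a hand-written test message into the same topic.
    # The payload is a made-up example; the services' real message format is not documented here.
    $ echo '{"ping":"test"}' | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic apigateway

This only exercises the transport layer; it is a way to confirm that messages flow between services, not part of the application itself.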

Bugs

Requirements

To run this application, make sure you have the Anaconda distribution of Python, Go, and Node.js installed.

Packages

Installation

Follow the instructions below. First, clone the repository and move into the cloned directory.

  1. Start Kafka. First, change to the directory where Kafka is extracted.

    • For Zookeeper, run $ bin/zookeeper-server-start.sh config/zookeeper.properties.
    • For Kafka, run $ bin/kafka-server-start.sh config/server.properties.
    • Use the command $ bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic topic-name once for each of the topics apigateway, DataRetrieval, modelexecution, postanalysis, and addAction (a loop that creates all five in one go is sketched after these steps).
    • Use $ bin/kafka-topics.sh --list --bootstrap-server localhost:9092 to list all the topics. Make sure all the topics mentioned above have been created.
  2. Run the API Gateway service

    • $ cd SquirtleSquad/APIGateway
    • $ npm install
    • $ npm run dev
  3. Run the User Management service in a separate terminal.

    • $ cd ../UserManagement
    • $ npm install
    • $ npm run dev
  4. Run the Session Management service in a separate terminal.

    • $ cd ../SessionManagement
    • $ npm install
    • $ npm run dev
  5. Run the Data Retrieval service in a separate terminal.

    • $ cd ../DataRetrieval
    • $ go get github.com/Shopify/sarama
    • $ go build kafka-connectors/connector.go
    • $ go run kafka-connectors/connector.go
  6. Run the Model Execution service in a separate terminal.

    • $ cd ../ModelExecution
    • $ python ModelExecution.py
  7. Run the Post Analysis service in a separate terminal.

    • $ cd ../Inference
    • $ python Inference.py
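
As a shortcut for step 1 (not part of the original instructions): instead of running the --create command once per topic, a small shell loop run from the Kafka directory creates all five topics in one go, using exactly the flags from step 1:

    for t in apigateway DataRetrieval modelexecution postanalysis addAction; do
        # Same flags as the per-topic command in step 1.
        bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
            --replication-factor 1 --partitions 1 --topic "$t"
    done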

Once everything is up and running, the application can be accessed at localhost:8080/.
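
As a quick sanity check that the gateway is actually listening (the individual routes are not documented in this README, so this just probes the root URL):

    $ curl -i http://localhost:8080/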

NOTE:

Team Members:

Anurag Kumar
Sathyan Venkatanarayanan
Shanmukha Surapuraju