GeroVanMi / algorithmic-quartet-mlops

A showcase Machine Learning Operations (MLOps) Project.

Create project and template for W&B #29

Closed vollOlga closed 4 months ago

vollOlga commented 4 months ago

```python
import random
import time

import torch  # assumed here; used below to select the training device
import wandb
```

We start by initializing the hyperparameters and the run metadata:

```python
NUMBER_OF_EPOCHS = 20
BATCH_SIZE = 32
LEARNING_RATE = 0.003

EXPERIMENT_NAME = "Experiment 01"
PROJECT_NAME = "Pockemon_picture_creation_training"
ENTITY_NAME = "algorithmic-quartet-zhaw"
ARCHITECTURE_NAME = "UNet2DModel"
DATASET_NAME = "pokemon"

MODEL_SAVE_PATH = f"../models/{ARCHITECTURE_NAME}_{int(time.time())}.pt"
```

In dev mode we can define how many epochs we want to train, just to verify that the model works:

```python
DEV_MODE = False

if DEV_MODE:
    print("RUNNING IN DEVELOPER TESTING MODE. THIS WILL NOT TRAIN THE MODEL PROPERLY.")
    print("To train the model, set DEV_MODE = False in run_experiment.py!")
    BATCH_SIZE = 3
    NUMBER_OF_EPOCHS = 3
    EXPERIMENT_NAME = f"DEV {EXPERIMENT_NAME}"
```

Then we start a new wandb run to track this script:

```python
# Select the training device (assumed: PyTorch decides between GPU and CPU)
device = "cuda" if torch.cuda.is_available() else "cpu"

wandb.init(
    project=PROJECT_NAME,
    entity=ENTITY_NAME,
    name=EXPERIMENT_NAME,
    config={
        "learning_rate": LEARNING_RATE,
        "architecture": ARCHITECTURE_NAME,
        "batch_size": BATCH_SIZE,
        "dataset": DATASET_NAME,
        "epochs": NUMBER_OF_EPOCHS,
        "dev_mode": DEV_MODE,
        "device": device,
    },
)
```
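
Once the run is initialized, everything passed via `config` is available again through `wandb.config`, so the rest of the training script can read its hyperparameters from a single source of truth. A minimal sketch of that read-back (not part of the template above):

```python
# Read the tracked hyperparameters back from the run config
config = wandb.config
print(config.learning_rate, config.batch_size, config.epochs)
```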

Next we simulate a training run:

```python
epochs = 10
offset = random.random() / 5
for epoch in range(2, epochs):
    # Our simulated metrics: accuracy rises and loss decays over the epochs
    acc = 1 - 2 ** -epoch - random.random() / epoch - offset
    loss = 2 ** -epoch + random.random() / epoch + offset

    # Log our metrics to wandb
    wandb.log({"acc": acc, "loss": loss})
```
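
By default every `wandb.log` call advances the run's internal step counter. If the charts should be aligned with the training epochs instead, the step can be passed explicitly; a small variation of the call above:

```python
# Log against the epoch number instead of the implicit step counter
wandb.log({"acc": acc, "loss": loss}, step=epoch)
```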

Finally we finish the wandb run; this is optional in scripts but necessary in notebooks:

```python
wandb.finish()
```
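
`MODEL_SAVE_PATH` is defined at the top but not used yet in this template. Once a real training loop replaces the simulation, the trained weights could be saved there and attached to the run as a versioned W&B artifact. A minimal sketch, assuming a hypothetical PyTorch `model` object, with both calls placed before `wandb.finish()`:

```python
# Save the trained weights to the path defined at the top of the script
torch.save(model.state_dict(), MODEL_SAVE_PATH)

# Attach the weights file to the run as a versioned W&B model artifact
artifact = wandb.Artifact(name=ARCHITECTURE_NAME, type="model")
artifact.add_file(MODEL_SAVE_PATH)
wandb.log_artifact(artifact)
```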